Another highly influential programming language not listed in the article is Self. Although I’ve taken two graduate-level courses in programming language theory, I didn’t learn about Self until just a few years ago, when I was bitten by the Smalltalk and Lisp bugs and started reading about the history of these languages and their environments. One of Self’s most significant contributions is the idea of prototype-based programming, which is the influence behind JavaScript’s traditional lack of classes (though newer versions of JavaScript have support for classes).
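For anyone who hasn’t seen the style: here’s a rough sketch of what prototype-based programming looks like in modern JavaScript (illustrative only, and obviously not Self syntax):

```javascript
// Prototype-based programming: no classes, just objects delegating to objects.
const point = {
  x: 0,
  y: 0,
  toString() { return `(${this.x}, ${this.y})`; }
};

// Make a new object whose prototype is `point`; property lookups that miss
// on `p` are delegated up the chain to `point`.
const p = Object.create(point);
p.x = 3;
p.y = 4;

console.log(p.toString());                        // (3, 4)
console.log(Object.getPrototypeOf(p) === point);  // true
```

No class declaration anywhere: `point` is both a usable object and the “template” for its clones, which is the core Self idea JavaScript inherited.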

I don’t know if Self was ever commercialized; I do know that once Java was released Sun focused much of its attention on promoting Java at the exclusion of other object-oriented environments that Sun invested in (such as the OpenStep Objective-C API, which was actually jointly developed by NeXT and Sun). But Self is probably one of the most influential programming languages; it’s just a shame that this language was never brought up in any of the computer science courses I’ve taken.

Self has an even bigger legacy: the JVM is basically all the technology that made Self fast, ported over to Java.
David Ungar pioneered not only Self and prototype-based inheritance, but also generational garbage collection and polymorphic inline caching.

Before Sun cannibalised the Self team into Java, Java was actually slow as a snail.

Another fun fact: a reduced version of Self was used as the programming language for most of the Apple Newton, because Dylan was taking too long to get off the ground.

Ungar’s thesis is one of the most magnificent tours de force it’s ever been my pleasure to read. It’s mostly a series of “Here’s why this technique is impossible to implement efficiently, but never mind that; here’s how we implemented it efficiently anyway.”

Then Lars Bak (who worked on Self and HotSpot) used the techniques to make JavaScript fast with V8.

This is the second time this week I’ve seen Lars Bak mentioned. I’m so glad to have learned about his work on virtual machines; it’s a direct line through the history of programming languages, from Self and Smalltalk through Java to the V8 JavaScript engine, and Dart as well.

Having played with Self a bit before the Newton came out and then seeing it in NewtonScript, it was a very cool thing to watch the idea of system-wide prototype-based inheritance filter through the community. I think it was a bit too much for most people to grok, but the ability of a coder to modify only a few prototypes in the system soup to change the behavior of existing apps was absolutely amazing.

Contrast to nowadays, where people mainly seem to view it as a source of bugs and security vulnerabilities. Which is true and reasonable, but a shame. I always like monkey-patching, at least in small personal projects. But if you want to distribute anything, you end up needing to go back and remove all of it.
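To make the trade-off concrete, here’s the kind of monkey-patch being discussed, sketched in JavaScript (extending built-in prototypes like this is exactly the practice now considered a bug and security hazard, so treat it as a toy):

```javascript
// Patching a shared prototype changes behavior program-wide, even for
// objects created before the patch. (Illustrative only; avoid in shipped code.)
String.prototype.shout = function () {
  return this.toUpperCase() + "!";
};

// Every string in the program now has the new method.
console.log("hello".shout()); // HELLO!
```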

Something similar in thought (but less generic) was the concept of “datatypes” in AmigaOS. It meant software could seamlessly import and export data formats even though it had no knowledge of their internal representation.

A word processor could include for instance PNG images even though PNG was not even invented when the word processor was created!

> Self has an even bigger legacy, the JVM is basically all the technology that made Self fast ported over to Java.

Java only became fast after the HotSpot server compiler, which was written by Cliff Click (with Michael Paleczny and Christopher Vick) and takes a different approach than Smalltalk/Self compilers.

This. And while we’re at it, we shouldn’t discount awk which had a profound impact on Self (and ultimately JavaScript) syntax, in turn. So much so, in fact, that the following pointless code (which doesn’t even show JavaScript/awk regexp constants) is both JavaScript and awk:

    function f(x) {
      a[x] = "whatever"
      for (e in a)
        if (a[e] == x)
          return e
    }

Edit: but awk doesn’t belong on the list because it’s far from dead.

There’s an interesting language strongly influenced by Self: Lisaac. Lisaac is a purely prototype-based language extending Self’s prototype paradigm with extremely good performance (close to C code; the biggest part of the research around this language was about the high-performance code generated by the compiler).

For instance, you can define dynamic inheritance: your inheritance can be defined by a function which returns an object – so, a prototype – from among several.

The author of this language told me that discovering Self was one of the most important days in his life 😉

I don’t think it did. Those of us who bothered to spend more than a glancing look at prototypes figured them out pretty easily. Including classes in JavaScript feels like pandering: there really wasn’t much to directly gain from them, but they threw them in anyway because the TC knew it’d gain JavaScript some popularity.

Wha? That’s a take.

Here’s mine: ES6 classes are wildly smart. They provide the benefits of prototypical inheritance–you can just reach into the thing and do what you want to do!–while making it way easier for many developers to read and parse in short order, while reducing the difficulty spike in moving to JavaScript (or, today, TypeScript).

“Pandering”. Right.

I can write old-style object prototypes. It makes my eyes bleed and makes me make more mistakes. So I’m just not gonna do that anymore.
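For comparison, here’s the same tiny object written both ways, as a sketch showing that `class` is sugar over the same prototype machinery:

```javascript
// Old style: a constructor function plus manual prototype assignment.
function OldCounter() { this.n = 0; }
OldCounter.prototype.inc = function () { return ++this.n; };

// ES6 style: the same prototype chain underneath, clearer syntax on top.
class NewCounter {
  constructor() { this.n = 0; }
  inc() { return ++this.n; }
}

const a = new OldCounter();
const b = new NewCounter();
a.inc();
b.inc();

// In both cases the method lives on the prototype, not on the instance.
console.log(Object.getOwnPropertyNames(a)); // [ 'n' ]
console.log(a.n, b.n);                      // 1 1
```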

“Let’s” not, but I don’t care about much of anything in that article. I haven’t written something that targets a browser that isn’t ES2015 since…2016.

It’s 2020. Things move on.

It depends somewhat on when you tried to learn prototypes in JS. The time before the Firefox implementation detail `__proto__` leaked into general web platform usage had some rough spots, to pick the big obvious example that comes to mind. It still took almost too long after that before `Object.getPrototypeOf()` and `Object.setPrototypeOf()` were standardized. Admittedly, “the prototype of this instance” shouldn’t always be accessed and/or manipulated directly in “proper” prototype-oriented code, but there are a lot of practical cases that show up where it ends up being useful.
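For reference, the standardized accessors in question look like this (a minimal sketch; direct prototype surgery is usually a last resort, and `setPrototypeOf` is often noted to be slow in engines):

```javascript
const animal = { speak() { return "..."; } };
const dog = Object.create(animal);

// Read an object's prototype without the legacy __proto__ property.
console.log(Object.getPrototypeOf(dog) === animal); // true

// Re-point an existing object at a different prototype.
const cat = { speak() { return "meow"; } };
Object.setPrototypeOf(dog, cat);
console.log(dog.speak()); // meow
```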

That’s interesting that the curriculum didn’t teach it! I went to a not terrific school and we learned about Self in undergrad P/L theory.

A lot of smaller no-reputation schools actually have pretty good teachers, compared to some of the bigger or more well known schools. In my case I transferred in my 3rd year from a school with 2k or so CS students to one with 150. Our professors were teachers, not researchers. We had their full attention. Compared to my experience at the larger and better known school where until you were a senior you were one of 150-250 students in the courses. They lectured, you spent time with TAs (very mixed results, I had good ones but mostly mediocre or awful ones who just wanted the check).

I’m surprised Clojure isn’t in the list either.

Edit: To clarify, Clojure is mostly a dead language that didn’t have any innovations by itself, but it did influence many programmers (the creator is good at marketing). It helped push forward the FP mindset among users of other mainstream languages (JS, Python, Java).

This suggests we should clarify what it means for a language to be “mostly dead”. If we consider size compared to the total market, or trajectory, we may ultimately conclude that a project such as Clojure is “mostly dead” compared to JavaScript or Java.

However, this isn’t ultimately the most useful metric. Would you, for example, select a restaurant based on the reviews and the cuisine, or on the total annual revenue of its parent company? If your metric suggests you forego surf and turf at a local diner in order to eat a Whopper, you may be asking the wrong questions.

For your consideration here is a better one. A language is alive when its ecosystem is likely to receive enough interest and talent to continue to develop enough to allow its users to continue to accomplish useful goals. Being embedded in 2 of the top platforms and being able to reuse those libraries is a factor. Relying on these host platforms means that it can remain alive indefinitely while being useful to only thousands instead of millions as long as it drives enough interest to pay the salaries of the dozens who develop the core and the hundreds that write useful tools.

I think the mostly dead classification implies a decline, vs a non-mainstream mindshare of all developers.

Isn’t Clojure a relative newcomer for it to influence language design in general?

A stepping stone to more mainstream languages, perhaps? I have hardly ever seen a Clojure program, so please elaborate.

It has less market share than Perl or Delphi (make your own judgement). It was really hyped (rightfully so; a practical Lisp for production use? fun! sign me up), then declined fast.

Like most(ly dead) languages, it still has its followers; in the case of Clojure, mostly a cultish group (my impression from r/clojure and other forums).

Can you explain and support the idea that the number users using the language productively declined? The state of Clojure survey seems to have held steady at 2500 from 2015 to 2020 with 60% saying they used it for work in 2015 vs 69% saying they used it for work in 2020.

Going back to 2010 we see less than 500 respondents and only 27% using it for work. A charitable assumption is that it grew substantially between 2010 and 2015 and held steady between 2015 and 2020.

I’m surprised by how hard this article is on Algol 68. It was pretty influential:

– Influenced C’s type system. I’m just going to quote Dennis Ritchie on this: “The scheme of type composition adopted by C owes considerable debt to Algol 68, although it did not, perhaps, emerge in a form that Algol’s adherents would approve of. The central notion I captured from Algol was a type structure based on atomic types (including structures), composed into arrays, pointers (references), and functions (procedures). Algol 68’s concept of unions and casts also had an influence that appeared later.”

– Influenced bash’s syntax (fi, esac)

– I’m not sure if this counts as an influence or not, but objections to Algol 68’s design led Niklaus Wirth to revive his earlier proposal for a new version of Algol, called Algol W, and ultimately evolve it into Pascal.

That ALGOL-68 was hated by almost everyone who liked previous ALGOL versions is not to be discounted.

ALGOL-68 was loved by people who would have never used ALGOL to begin with (and they didn’t start doing so with ALGOL-68).

It was difficult to write compilers for, slow, and far too complex.

Looking back at ALGOL-68, it looks like a comparatively small language compared to many of our current languages, e.g. Java, C++ and Python. I loved the definition of it, but never got to use it.

That modern languages are even worse does not make ALGOL-68 good, in my opinion. It’s understandable why people would like it, though, compared to modern languages.

It’s very similar to Go.

And like Go, it’s loved by people who don’t know the ALGOLs, and hated otherwise.

That’s definitely an uncommon comparison.

Go is basically Oberon with C-ish syntax, and Oberon is probably the Wirthiest of Wirth languages, basically everything Wirth thought was good distilled into a single language. And given that Wirth’s whole family tree of languages exists because he objected to Algol 68 so hard he walked out on Algol…

Oberon-2 merged with Limbo to be more precise, actually.

Oberon didn’t only have Oberon-2 as a successor; it also had Active Oberon, Oberon.NET, Zonnon, Component Pascal and Oberon-07, the latter with multiple revisions.

Expecting to see Perl in an article like this in 5 to 10 years. Wasn’t the first language I learned, but there’s a fond place in my heart for it. At the time, it was the (only) less painful way to get at sockets, libc calls like getpwnam(), etc. I know the TIOBE index is flawed, but…ouch:

Perl is possibly the clearest example of the “Osborne effect”: announcing Perl6 took focus off Perl5, but it took far too long to arrive. So by the time it did many of the community had drifted away and the core use cases were becoming less relevant. Python almost did this to itself with the 2to3 transition.

An important difference is that Python 3.0 arrived soon, while it took ages for a complete Perl 6 release to be available. That is, if you didn’t want to learn or use Python 2.x because it was soon going to be obsolete, you already had a working alternative; but if you didn’t want to learn or use Perl 5.x because it was soon going to be obsolete, for a significant amount of time Perl 6 wasn’t an option.

> it took ages for a complete Perl 6 release to be available.

And unfortunately, the brilliant minds that worked on Perl 6 spent so much time rewriting the compiler tool chain and basically treating just that part as a research project, that it was incredibly difficult to actually contribute and push the project forward. Couple that with a spec that never seemed finished and a community that tore itself apart because of assholes, and it was a perfect storm to remove Perl from prominence.

But how influential was it? It seems most of the unique ideas in Perl have not been adopted by other languages.

But if Perl can be credited with kick-starting the dynamic language boom (Python, Ruby), then it has been massively influential.

I’d say it was pretty influential for Ruby, CGI-BIN, PCRE, and other things that will live on in more refined forms. PHP has some obvious Perl influence as well.

Edit: Maybe also Perl’s Configure (crazy wide cross-platform portability) and CPAN. They were pretty ahead of their time.

While I agree that Perl influenced Ruby which I guess in turn influenced CoffeeScript and Elixir to some degree, the lineage of PHP seems to be dead. I don’t really know any languages that are inspired by PHP itself, other than maybe Hack which one could argue is very closely related to PHP.

Even if the lineage dies with PHP, that pulls in Facebook and WordPress.

Smarty could possibly take credit for inspiring some of the template engines out there also.

I still write and actually still (sorta-kinda) like PHP, but there’s no major concept/feature I can think of that’s intrinsic to PHP rather than inherited from other languages. The Perl (and general “C-like language”) influence is notable, and as PHP has matured it’s started to feel ever more like Java. (“To write a simple PHP server app, first just initialize your PSR-11 compatible dependency injection container and add your PSR-7 compatible HTTP request/response handlers to it…”)

React (the JS framework) is close to PHP as a concept. Facebook’s team had to take ideas from their existing code. With JSX, we could potentially call it a new language.

CPAN is probably the biggest influence; it is the ancestor of most package managers used today. But Perl has also spread its dialect of regular expressions and the idea of regular expression literals, plus it was hugely influential on Ruby, which has then lived on in Elixir and Crystal.

Also, you may be correct about it kicking off the scripting languages’ popularity, and as someone else mentioned, it is also responsible for CGI-BIN.

Perl incorporated regular expressions into a general-purpose programming language in a way that seemed fairly unique at the time. Today they seem to be everywhere, often with the help of PCRE (Perl-Compatible Regular Expressions).
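JavaScript is probably the clearest example of that spread: Perl-style regex literals are part of the language grammar itself. A small sketch:

```javascript
// A regex literal, straight out of Perl: no constructor call, no
// double-escaping inside a string, compiled when the script is parsed.
const datePattern = /^(\d{4})-(\d{2})-(\d{2})$/;

const m = "2020-06-01".match(datePattern);
console.log(m[1], m[2], m[3]); // 2020 06 01

// Perl-inherited flags came along too (g, i, m, ...).
console.log(/perl/i.test("PERL is everywhere")); // true
```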

The problem with Perl and this kind of list is that in reality it’s just an elegant way of writing awk/sh scripts with proper flow control; i.e., like Python today, it was seen as a way to get around the compatibility and performance issues associated with shell/awk in the bad old days of commercial Unix.

It might sound strange to modern eyes, but there was a time when Perl was considered elegant and robust. Then again, very few young people have ever experienced the horrors of trying to write portable shell code for commercial Unix. Perl, on the other hand, was almost completely identical regardless of what variant of Unix you happened to be running, and vastly faster than sh/awk at a time when even expensive systems could be slow by modern standards.

For modern-day script writers, Ruby and Python have all but replaced Perl, even though that’s not stopping the enterprise from keeping their Perl codebases alive and kicking for the foreseeable future. But even there it’s being challenged from below by the fact that bash and gawk have become fairly universal on Linux systems, and hardware fast enough that regex performance rarely matters.

In my mind what Perl brought to awk/sh was not flow control but rather data structures that did not make you want to kill yourself. Its standard library also delivered a lot of core sysadmin functionality back in the day and the reporting/output features fit very well with what was needed for early web CGI scripting.

I’d also give it some credit as a cautionary tale.

A lot of subsequent languages have focused on readability, partly as a reaction to some of the… unique things people came up with in Perl.

Yeah, the idea that programming languages should be more like human languages seemed promising at the time. Perl showed us that we definitely do not want that.

Not sure I understand that one. I know Larry Wall is a linguist. But there’s not much about Perl, to me, that seems like a human language.

If I dig into the gripes about the syntax, it’s usually use of the implied $_ variable, wide use of sigils ($, @, %), derefs of complex data structures (hash of lists of hashes, etc) or regex syntax that people are talking about.

That is one of the things that are so brilliant about Perl. Tell a developer to create a programming language inspired by natural languages and you’ll get verbose crap like COBOL which superficially looks “natural” but is very far from how languages work.

Perl, on the other hand, looks just like a programming language would if it was the only way we could talk to computers, day in, day out:

– Implicit “it” variable,

– Very brief syntax for common operations,

– More than one way to say things,

– Context determines the meaning of things, and so on.

It’s not at all what one instinctively thinks of as “a programming language drawing from natural languages”, but on closer inspection (and daily use) that’s exactly what such a thing would look like!

FWIW, one of the reasons Perl 6 (now Raku) was started was to fix the errors made in Perl, to be able to get an even more natural language. Even for difficult things:

  react {
    whenever signal(SIGINT) {
      say "Aborted";
      done;
    }
    whenever Promise.in(3) {
      say "Timed Out";
      done;
    }
  }
  say "Goodbye";

This little program will wait for you to press Control-C or 3 seconds, whichever comes first. And say “Goodbye” on the way out.

Yeah, Wall knew what he was doing. He didn’t try to make it look like a human language, he tried to make it work more like a human language.

Another significant experiment in Perl was embracing the idea that everybody writes in their own personal dialect or subset of the language. Again, the conclusion seems to be that this is a really bad idea, since someone else will have to maintain the code eventually. But nevertheless it is valuable that the experiment has been tried. Python took some important lessons from that.

> That is one of the things that are so brilliant about Perl.

> – More than one way to say things

I respect that others may disagree, but I actually find this to be a major disadvantage in a programming language. From a readability perspective[1], having more than one way to express the same task is in my opinion a bad thing. As a trivial example, Perl supports both pre- and post-conditionals:

    if ($something) { $x = 1 }


    $x = 1 if $something;

which I find super cumbersome to parse. In all cases, I have to keep the whole phrase in mind before I can understand it, because I don’t know which way the phrase will go. If there were only pre-conditionals, or only post-conditionals, then I would know how to parse each part of the phrase before I have read the entire phrase, which makes it easier to parse complex phrases.

I think this is one major reason languages like Perl or C++ are often considered hard to read. Having so many ways to express the same thing means a major mental load until you’ve figured out what is trying to be expressed.

[1] By now we’ve all come to agree that easy reading of code is far more important than easy writing of code, right?

Reading code is more important than writing it, but I’m not convinced that two ways to phrase the same conditional assignment is a bad thing. Or, rather, it might not be a bad thing that there are two ways to phrase the same conditional assignment.

See what I did there? Same meaning, different order of clauses. Emphasis ended up on different parts of the sentence! This is a very powerful out-of-band signaling path to control how the reader interprets the literal words, and being used to Perl where we also have it, it is weird to not have it in other languages.

Sometimes the actual predicate is the important/interesting bit, in which case putting it first makes sense: `if (user_is_underaged) return;`. Sometimes the predicate is not as interesting as the expression it’s conditioning: `say "message" if debug_mode;`.

You do realize that comes from english right?

    if $you-are-hungry { make-a-sandwich() }
    make-a-sandwich() if $you-are-hungry;

If you remove all of the non-letter characters, you are left with very understandable english sentences.

    if you are hungry make a sandwich
    make a sandwich if you are hungry

So unless english is a second language to you, it should be fairly easy to understand.

If you pay attention to how people use those different forms in english, you will also notice that the infix form of “if” tends to be used for simple short sentences. Which is exactly how I use it in Perl and Raku.

    sub factorial ( UInt $n ){
      return 1 if $n == 0;
      return 1 if $n == 1;
      return $n * factorial($n - 1)
    }

Though I might consider using `when` instead.

    sub factorial ( UInt $_ ){
      return 1 when 0;
      return 1 when 1;
      return $_ * factorial($_ - 1)
    }

Of course a junction would be useful

    sub factorial ( UInt $_ ){
      return 1 when 0|1;
      return $_ * factorial($_ - 1)
    }

You are probably having fits with that too.

The thing is, that also reads fairly well in english.

    return one when [it is] zero or one

Often times in informal english the “it is” in such a sentence is left off for brevity. So I left it off, because we are obviously talking about “it” (`$_`). I mean, what else could we be talking about? I could easily see this being said as a response to another person.

    > Alice: What result should we give the user?
    > Bob: Return one when zero or one.
    > Otherwise multiply it by …

You are probably thinking that communicating with a computer should be more formal. You should also be wearing nicely ironed clothes with a jacket and tie.

The problem with that is that you aren’t communicating with a computer. You are communicating with everyone that is going to read your code. Reading a technical manual can be very tiring for even the most stoic of readers.

I prefer to read a well written novel. Good Perl and Raku code often reads more like a novel than a technical manual. Even when it is kept very precise about its semantics.

Which means that when I am done doing something in Perl or Raku, I want to continue doing more of that. I don’t want to stop.

Sometimes I will find myself re-reading the same line repeatedly at 3am.

Further, note how I used the infix form of “if”.

    return 1 if $n == 0;
    return 1 if $n == 1;

The result on the left is very simple. Not only is it simple, it is the same for both lines.

It is very common to use it in this manner. Where they sit at the very beginning of a function as a kind of guard clause. The real important bit is the right part of the lines. Which actually stands out more than the left half, because it is closer to the center of the screen.

After those two lines, we know two things about `$_`. It is neither a `0` nor a `1`, because we already dealt with both cases. So we don’t have to worry about them in the rest of the function.

For the most part, when I see a line like that I know that I can safely skip over it. That is because it is almost only ever used for that type of thing. As a way to deal with simple cases early in the lifetime of a function. It also means that I can very quickly glean the information I need for that very same reason.

People tend to have a lot of bad things to say about Perl and Raku.

Almost everyone who has used them enough to get comfortable with either of the two languages would say just about the opposite to most of those things.

Basically, it’s bad in theory, but it’s good in practice.

Reminds me of a video “Clay Shirky on Love, Internet Style” (9m14s long)

Particularly this line:

> And it was at that moment that I understood what was going on. Because they didn’t care.

> They didn’t care that they had seen it work in practice, because they already knew it couldn’t work in theory.

I very much agree that it is far more important to be easy to read. Which is why I find Perl and Raku to be awesome.

I can write things in the most readable way for a person as possible, rather than the only way the compiler can understand.

> derefs of complex data structures

Am I the only one who finds perl data structures simple and consistent? Once I understood the difference between () and [] (or {} for hashes), it was easy (too easy, some might say!) to construct complex data structures.

When I started to learn other languages after learning bash then Perl first, I was really dismayed at how clumsy it was to build up a data structure of any complexity. So verbose in the Java/C++/C# land.

Having those constructs so effortlessly available with a minimal amount of syntax spoiled me. To this day, I would likely still prefer to do any kind of complicated ETL involving deeply nested structures with Perl.

Could partially be the TIMTOWTDI thing. People mixing:



And quoting or not the key, etc. Slices and individual elements, etc. Probably looks like line noise to outsiders when you have a long one using parens to grab a specific array element, combined with $$ and so forth.

Plus many years of Bash script intended one-liners to replace or augment awk/sed leading a lot of Perl Golfers to a sublanguage that makes APL look far saner in comparison. That the code golf then shows up in “production scripts” and libraries (if for no other reason than muscle memory) builds a fortress wall to readability by outsiders.

> But how influential was it?

Raku (formerly known as “Perl 6”), PHP, Ruby and PowerShell were directly influenced by Perl.

I think it would be fair to give Perl that credit. When it was first released, there really wasn’t anything else like it. There was sh/csh, awk, sed, but nothing that really showed the path forward as Perl did.

That said, once Python 1.5.2 (or so) hit, Perl was done for. It was simply better, and there was no reasonable way for Perl to recover the lead, or even really coexist. Perl had a lot of momentum and took many years to decline, but that was the tipping point, in my mind.

It remains to be seen whether Python3 will displace Python2. 🙂

You do realize that there are more Perl programmers and more Perl code written today than there was back in ’99 when Python 1.5.2 was released.

Sure the percentage has gone down, but the count has gone up.

Perl consultants and distributors know that Perl is still widely used duct tape in the industry, and has even seen a slight upturn in popularity recently (although perhaps not as much as some other languages), when a new generation of developers has discovered how useful it can be. It’s just not something one talks about.

I like Perl a lot; it is the language I have used the most in my professional career. That being said, at my last two workplaces it happened to be on the list of forbidden technologies: I wasn’t even able to write a simple one-liner and instead had to resort to using awk, sed and the like. It sadly suffers from underserved hate from developers who never used it.

s/underserved/undeserved/ eh? (I don’t mean to criticize, I’ve just been noticing a lot of malapropisms recently. I suspect some auto-correct is goofing up.)


Somewhat deserved, surely?

Perl is like the Continuum transfunctioner, “Its power is exceeded only by its mystery.” 😉

Can you expand on the “forbidden technologies”? What else was on the list, what were the penalties for entering forbidden territory? Could you use it on your own machine for research?

As someone who works at a company where Perl is used as duct tape, we’ve slowly but surely been ripping it out piece by piece and replacing it with Python (and by “we”, I mean “me”).

Much of our “duct tape” is unmaintainable code with undocumented arcane regexes everywhere, single-letter variables, and 1990s-era flow control. Nothing you’d find in Modern Perl is in our duct tape. Ultimately, the fastest turnaround on “we need feature X added to Perl script Y” is going to be “rewrite Y in Python, add feature X”, because actually extending the Perl code has proven to be a nightmare.

(and as a side note, most of these Perl scripts ran out of cron on various boxen scattered across our network, and as I’ve been rewriting them I’ve also been removing them from cron and putting them in our centralized setup that uses Jenkins to run scripts, pulled out of git, inside Docker images on a schedule)

It sounds like bad code is your problem, and not Perl. Am I missing something?

I’m a Perl fan, and I’ll be the first to admit that something about Perl makes it easier to write idiosyncratic code that the writer understands, but the maintainer may not. The almost fanatical adherence to an official idiom one sees in Python is almost wholly lacking in Perl.

Perl was probably culturally influential – scripting glue – but as a language it’s almost uniquely eclectic. There are influences from all over, mostly shell and AWK but also from functional languages.

I have recently started to use it again as a tool for grepping across multiple lines. It’s distributed with every Linux. So it will survive. Maybe it’s already dead as a programming language though, certainly because Pascal is also considered dead.

I am sad that Forth did not make the list.

It showed how to make a minimal programming language that runs in a very small amount of space with just a stack. And did a lot to popularize RPN notation.

Even if you do not write in Forth, you can still benefit from knowing the ideas. For example I could not have written my answer at if I did not know the ideas of Forth.
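The whole execution model fits in a few lines. Here’s a toy RPN evaluator sketched in JavaScript (not real Forth, and only four arithmetic words, but it shows the push/pop discipline Forth is built on):

```javascript
// A tiny RPN evaluator in the Forth spirit: numbers are pushed onto a
// stack; each operator pops its operands and pushes the result.
function rpn(src) {
  const stack = [];
  for (const tok of src.trim().split(/\s+/)) {
    if (/^-?\d+$/.test(tok)) {
      stack.push(Number(tok));
    } else {
      const b = stack.pop();
      const a = stack.pop();
      switch (tok) {
        case "+": stack.push(a + b); break;
        case "-": stack.push(a - b); break;
        case "*": stack.push(a * b); break;
        case "/": stack.push(a / b); break;
        default: throw new Error("unknown word: " + tok);
      }
    }
  }
  return stack.pop();
}

console.log(rpn("3 4 + 2 *")); // 14, i.e. (3 + 4) * 2
```

No parser, no precedence table, no expression tree: the stack is the entire evaluation mechanism, which is why Forth could run in such a small amount of space.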

I’m a younger millennial (born in early 90s) who got his start with BASIC. The first programming I did was on Q-BASIC on our aging desktop machine (386 with Windows 3.1) that we didn’t use for much, especially with very limited Internet use.

When I took programming in high school, we started with TI-BASIC on TI-83 calculators for about a month, as my programming teacher felt like this best replicated his experience learning programming on a TRS-80. I tend to agree, and this is a great use of BASIC. It’s the default programming interface on a widely used computer to this day.

We then moved to VB6 for our “serious” programming, although we also did JavaScript and Java.

My first programming professionally was done in an office setting at a temp job using VBA to help with some Excel work. And then my first job as a software engineer, even though I wasn’t writing it, did have some Visual Basic.NET floating around (most of my work was in C#).

I started to learn programming by writing programs in TI-BASIC on the TI-84 (compatible with the 83) calculator. I just got it because we were required to have a graphing calculator for math class.

It was just enough of a push. Small programs and games were fine and I actually got really used to the keyboard (I can still type on it pretty quickly these days). For longer programs I def. remember wanting to program with more monitor real estate and not having to rely on GOTOs. That’s how I started to learn Python, which I still use daily.

Same. My TI-BASIC magnum opus was 527 lines of spaghetti code, implementing dozens of nested menus for solving any trigonometry problem I was assigned. I honestly think that this is one of the best ways to introduce programming to a kid: “Hey, that math homework looks pretty tedious…wouldn’t it be nice if your calculator could do all the work for you?”

My story is similar to the OP’s. Started with TI-BASIC, and downgraded to Java for a programming class. 🙂

It turns out, you can actually write programs in TI-BASIC on a computer, and sync the file to the calculator with the usb connector.

I used to charge kids in my class for copies of my games (Snake, and a Zork-esque text adventure). A lot of early business lessons there, in retrospect.

My first language was Basic (first on an HP-2000 timesharing system and then on the Apple II), but my second language was 6502 assembly, something for which I will be forever grateful – with the groundwork laid by assembly, C was a wonderful step up, handling all the tedious parts of writing assembly while allowing nearly all the same precision and control – reading the original K&R book was a transcendent experience. After those, other languages were easy.

Great article, I love learning more about language influence. I was wondering something:

> Smalltalk wasn’t the only casualty of the “Javapocalypse”: Java also marginalized Eiffel, Ada95, and pretty much everything else in the OOP world. The interesting question isn’t “Why did Smalltalk die”, it’s “Why did C++ survive”. I think it’s because C++ had better C interop so was easier to extend into legacy systems.

Maybe it’s linked to performance? Even today some applications are rewritten from Java to C++ (or clones are made) to gain performance (as with Cassandra and Scylla).

I have another theory about why Java beat other object-oriented programming languages except C++: cost. I can’t speak for Eiffel and Ada since I’m unfamiliar with those environments, but in the mid-1990s Java caused a revolution of sorts by providing free-as-in-beer runtimes and development tools that were available for download. I don’t know how good GNU G++ was in 1995, but I know that Borland’s Turbo C++ and Microsoft’s Visual C++ were affordably priced. By comparison, commercial Smalltalk implementations had expensive licensing fees, and in 1995 Squeak and Pharo didn’t exist. There was also OpenStep and Objective-C, but that was also very expensive; this 1996 article from CNet ( says that the Windows NT version of OpenStep cost $5,000 per developer and $25,000 for a version that allowed deployment.

With the high prices of Smalltalk and Objective-C environments, Java attracted a lot of companies and developers who wanted an object-oriented programming language that provided some of Smalltalk’s benefits (e.g., garbage collection, memory safety, a rich standard library) without having to shell out the cash for a Smalltalk implementation.

Cost was likely an issue for Eiffel adoption in the 1990’s.

There were a couple commercial implementations, with ISE being the dominant player. I bought a copy, but it wasn’t cheap.

The open source SmallEiffel (later SmartEiffel) compiler by Dominique Colnet (and others) from [1] didn’t implement some parts of the language until the late 1990’s.

It never had a big community.


But there was an affordable Smalltalk system in the early 1990s — Digitalk’s Smalltalk/V. It cost $99 and came with a huge manual that had a great tutorial. It introduced me (and lots of others) to the whole idea of object-orientation.

I’ve seen this several times. You need to get the stuff to the students who usually have no money to spare.

Emphasis on “NO”. Affordable doesn’t cut it.

If you can’t download it from somewhere for free, something else will be used by students that will later determine what they’ll use at their startups or companies.

Even better if it’s legal to download for free.

I’m not denying that the modern world of open source tooling and online documentation is better, but in the 1980s-early-1990s that’s just not how it was. Things like Turbo Pascal and Smalltalk/V cost money, but not that much, and were worth it because of the large printed manuals they included, which were needed because you couldn’t just Google things.

I don’t say that it wasn’t worth it.

I say that it was an additional barrier to entry, one which got more significant the later into the 90s we got. That Java was free was a significant boost to its popularity (the free JVM from Microsoft was probably also a significant contributor).

In the early 80s you always had a programming language for free with your computer and often, those manuals were not bad either as that was seen as an additional selling point for the hardware and that’s why the hardware producers did it.

Yes, which does not contradict his original point. Java arrived in the mid 90s, and unlike many of its competitors you could just download Java and use it for free which gave Java a competitive advantage. Universities had good Internet connections at that point.

It was probably too late (around 1996 to 97), but there was Smalltalk Express which was essentially Smalltalk/V for Windows that was released for free after ParcPlace and Digitalk merged. It never received any updates though and was only a Windows 3.1 program. It also came with a lengthy tutorial (actually a 16 chapter book) in HTML format.

Here it is running under Windows 10 (thanks to otvdm/winevdm that allows running 16bit programs in 64bit windows):

> download it from somewhere for free

The Smalltalk vendors provided student licenses / educator licenses.

Back in 1998, “…the largest object-oriented course in the world: the Open University’s new introduction to computing, for which over 5,000 students have enrolled for its first year. The course introduces computing from a systems-building stance specifically, through object technology, Smalltalk and our own adaptation and extension of Goldberg’s LearningWorks programming environment.”


I went to college in 1995… $99 would have likely been out of reach.

I had access to C/C++ in high school on a PC only because my father ran an engineering group and brought home a VC++ license for us. I didn’t get to use it that much before heading off to college.

I mostly used BASIC early on cause we had it free with the first 2-3 PCs we had. By high school I had gotten my hands on Turbo Pascal for free, maybe again through my father. High school classes were in Pascal. Turbo Pascal blazed and worked even on our 286 IIRC, which had minimal graphics capabilities. Even once we got a 486, TP was so lightweight compared to booting up Windows; that probably helped its popularity for people who were stuck on PCs at home.

As soon as I went to college I always had access to C/C++ and that’s what classes were taught in, and by Winter 95/96 we were all getting linux up and running and started having access to all the GNU stuff for doing our work.

Yes, that was the one I was using at the Smalltalk programming classes at the university.

I bought Turbo Pascal for Windows 1.5 (the last release before Delphi) for about 150 euros, and Turbo C++ for Windows was about 200 euros, when converted to today’s money.

This in 1990 – 1992.

> …Java caused a revolution of sorts by providing free-as-in-beer…

Yes, when a well funded corporation gives away programming language runtimes and development tools — that puts language and development tool vendors out-of-business.

This is something else I’ve been thinking about lately as I read more about Smalltalk and Lisp. The people who have used the commercial Smalltalk and Lisp offerings from the 1980s and 1990s, including Lisp machines, seem to have had great experiences with these environments. They felt very productive in these environments, and they also lament the state of today’s development tools for more popular commercially-used languages.

However, it takes a large amount of money to develop something like a modern-day Symbolics Genera. Where is the money going to come from? There seems to be little room in today’s marketplace for modern-day equivalents of ParcPlace or Symbolics, or even Borland for that matter, since free tools are good enough for many developers. Inexpensive Unix workstations from Sun and DEC helped kill the Lisp machine, and Linux on ever-faster and ever-cheaper x86 PCs helped kill the Unix workstation market; good-enough-and-affordable seems to do better in the marketplace than polished-and-expensive. It seems to me that development tools and operating systems are either the product of research organizations or are “subsidized” by companies whose main business is something else, unless the operating system is a monopoly.

I don’t see an easy way around this, though. Maybe if someone like Alan Kay or Richard Gabriel visited a FAANG campus and convinced its engineers to build their infrastructure on top of a new object-based operating system, we’d finally get a modern, full-featured Smalltalk or Lisp operating system that’s at par with the commercial Smalltalks and Lisps from the 1980s and 1990s….

That is why many of us that experienced those environments ended up gravitating around Delphi, Android, iOS, Java and .NET tooling.

While not the same, it is the most mainstream stacks that are somehow close to those ones.

I can’t see how something like Pharo Smalltalk is anything near something like Java unless you’re comparing it to C++. Same thing with .NET. Those tools are all fine in their own way (high performance, extremely fine tuned garbage collectors, and extremely large libraries), but I can’t see them being as productive once you’re up to speed.

Moreover, it is much safer to start to learn a new language/environment if you can immediately try it for free and not invest first into an expensive implementation.

Ada was also very expensive up until relatively recently, so there could very well be something to what you’re saying.

“The GNAT project started in 1992 when the United States Air Force awarded New York University (NYU) a contract to build a free compiler for Ada to help with the Ada 9X standardization process. The 3-million-dollar contract required the use of the GNU GPL for all development, and assigned the copyright to the Free Software Foundation. The first official validation of GNAT occurred in 1995.”

I always thought that was a major miscalculation by the Ada project. All they thought about was defense contractors, and their approach of relying on commercial implementations (priced for those contractors) made it unused in academic environments.

It is somewhat performance, but there are three other more important factors. (Java performance, not counting startup overhead, is fairly close to C++ now.)

One factor is determinism. GC introduces unpredictable (non-deterministic) time latency, making GC languages generally unsuitable for real time programming.

Another is size efficiency: Java programs have a well-deserved rep for using lots of memory. That isn’t a good mix for smaller/embedded systems. Java also carries along a large runtime, although Graal and other efforts are addressing this.

The final factor is that C++ is perceived (rightly or wrongly) to be an “improved C”. As the heir apparent to C, C++ has a ton of mindshare and momentum. Rust is now providing a major challenge, one I hope it wins! (Honorable mention to D, which is a nice language as well.)

In the mid 90s I wrote and supported a C++ program for moving around and processing data in a semiconductor factory. It ran on Windows and two different flavors of Unix. Half of my time was spent recompiling/relinking and trying to keep up with upgrades of the few libraries I used.
When Java came out it was so liberating – I’d write a new version of the software on my laptop and ship an update to a client with HP Unix and it would run without a hitch.
I doubt that there’s still a lot of use of C++ for entire applications. I suspect it’s mostly used in performance critical components.

Those rewrites don’t get much uptake in many enterprises, most are more open to just rewrite a couple of modules into native libraries and call them from Java, than throwing away the whole eco-system and respective tooling.

The APL section seems a little off. I downloaded the latest Dyalog version and the RIDE IDE and it was all very simple to use. Despite having zero APL experience I wrote a pretty nifty program to simulate something in my domain in like 3 lines of code. The keyboard thing is a non-issue, as the IDE lets you enter symbols, and you quickly start to memorize that you can enter command-key r to enter rho and so forth. I’ve used the ASCII J before and find the APL glyphs help me learn what’s going on better. It’s right that it isn’t a popular language, but that is sad. Dyalog has a lot of tools too, from web server, database libraries, R & Python bridges, GUI… and so forth.
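For anyone curious what that array-at-once style feels like without installing an APL, here is a rough Python/NumPy analogue; the APL fragments in the comments are approximate and assume zero index origin, so treat this as a sketch rather than a translation:

```python
import numpy as np

# NumPy analogue of APL-style array programming: whole-array
# operations along an axis, with no explicit loops.
data = np.arange(12).reshape(3, 4)        # roughly  3 4 ⍴ ⍳12  (zero-origin)
col_means = data.sum(axis=0) / len(data)  # roughly  (+⌿data) ÷ ≢data
print(col_means.tolist())  # → [4.0, 5.0, 6.0, 7.0]
```

The appeal of APL proper is that each of these operations is a single glyph, so the whole computation fits in a handful of characters.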

I think it’s fair to say that font issues were a significant reason for its decline in the late 80s and 90s (well before good unicode support). Other major factors were spreadsheets, which did many of the things APL was best at with an intuitive graphical interface, and OOP. APL didn’t have OOP, so it was for dinosaurs. Structured programming could have had the same effect—mainstream APLs picked up ifs and while loops 10-20 years later than the rest of the world—but I think the usability gap between APL and anything else for arrays was just too large at the time for that to hurt APL too much.

The article is wrong about mixing strings and numbers though. The requirement for homogeneous data, and the use of boxes, is a feature of the SHARP APL lineage, which includes J (although these languages could allow mixed arrays if they wanted to). But the APL2 and NARS families simply allow anything as an element of an array. These are much more popular, to the point that every recent APL I’m aware of uses this style of array. Possibly a reason why J wasn’t very successful; probably not a reason why APL usage disappeared.

See also

Thanks Marshall. I’ll yield to you in anything APL as I’m a total noob.

For those that don’t recognize his username, I think he works for Dyalog in APL implementation. Is that right?

It’s interesting to wonder whether, now that Unicode is much more ubiquitous and people have grown used to (and fond of) complex IMEs such as “emoji keyboards” and intricate ligatures of digraphs and trigraphs (fonts like Fira Code, Cascadia Code), there will be an interesting APL resurgence, or perhaps a modern Unicode-“native” successor.

So it was originally designed by a Harvard math professor (Kenneth Iverson) as just that: a new mathematical notation to write down. He didn’t get tenure (although he later got the Turing Award for some of that work), so he went to IBM and they actually implemented it. Aaron Hsu has done some blog posts on how he sometimes codes APL using a nice calligraphic pen and paper.

I’m sorry Forth is not on that list. It could be argued that it was not so influential for mainstream languages, but there is a whole range of concatenative languages that can be considered direct descendants.

I think Forth’s influence is more subterranean. I.e. PostScript and coreboot, etc. Forth is there, but rarely in your face.

As far as the underlying concepts go, speaking as someone who has experimented recently with Joy (one of the concatenative languages you mention; although it wasn’t directly influenced by Forth, I think, it’s a case of convergent design), I think it’s a shame it hasn’t been more influential.

The time may come: check out Conal Elliott’s Compiling to Categories.

Joy is a great language! I also wish it had been more influential.

Maybe you already knew this, but there are lots of great articles about Joy, and there is also Thun (which someone suggested to me around here).

Yeah, also the only other language that is as simple at its core as Forth is Lisp. I suspect that lots of CS people have at some point written their own version of either, or both, just for the experience.
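As a rough illustration of how small that Lisp core is, here is a toy reader/evaluator in Python; it handles only numbers, symbols and prefix application (no special forms like quote or lambda), so it is a sketch of the idea rather than a real Lisp:

```python
import operator

# A tiny environment mapping symbols to Python functions.
ENV = {"+": operator.add, "-": operator.sub, "*": operator.mul}

def tokenize(src):
    # Pad parens with spaces so split() does all the lexing.
    return src.replace("(", " ( ").replace(")", " ) ").split()

def parse(tokens):
    tok = tokens.pop(0)
    if tok == "(":
        lst = []
        while tokens[0] != ")":
            lst.append(parse(tokens))
        tokens.pop(0)  # drop the closing ")"
        return lst
    return int(tok) if tok.lstrip("-").isdigit() else tok

def evaluate(expr, env=ENV):
    if isinstance(expr, str):   # symbol: look it up
        return env[expr]
    if isinstance(expr, int):   # literal: self-evaluating
        return expr
    fn, *args = [evaluate(e, env) for e in expr]  # list: apply head to rest
    return fn(*args)

print(evaluate(parse(tokenize("(* (+ 1 2) 4)"))))  # → 12
```

Everything else in a real Lisp (macros, closures, the works) grows out of adding special forms to that `evaluate` dispatch, which is why writing one is such a popular exercise.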

Both ML and Smalltalk are definitely not dead, not even mostly dead, they’ve just evolved into language families whose members have different names. The ML family has Standard ML, OCaml, and Haskell, and the Smalltalk family has Pharo, GemStone, GNU Smalltalk, and others. These all may not have hugely wide adoption, but they are actively used in both academia and industry, and continue to grow and evolve, and their continued evolution is still influencing other languages.

Ok, you can argue about the definition of “mostly dead”, but whatever you decide these two just aren’t in the same category as the others on this list.

Haskell isn’t in the ML family. It is a descendant of Miranda and some other lazy functional languages.

I took two software engineering courses at MIT around 1990 that used CLU as the implementation language (one was a general medium-scale software engineering course, one was on writing compilers). I found it to be a very pleasant language, although the main thing I had to compare it to was C (C++ was just getting off the ground). It’s always nice to see it mentioned in lists like these.

> The interesting question isn’t “Why did Smalltalk die”, it’s “Why did C++ survive”. I think it’s because C++ had better C interop so was easier to extend into legacy systems.

Video Games. From around 1999 to recently, C++ was the language of game development. It only recently came under serious threat, from C#.

Very bad example.

Video games community is very luddite in what concerns adoption of technologies, they usually only move forward when the platform owners force them to do so.

Many moons ago, C, Pascal, Modula-2 and Basic were seen as Unity is seen nowadays; naturally real game devs had to use Assembly, especially to take care of the special purpose graphics and sprite engines.

The PlayStation 1 was probably the first console that forced them to start using C instead, while the 16-bit consoles were still Assembly territory.

So slowly everyone accepted that doing games in C, Pascal and what have you wasn’t that bad.

C++ only became kind of accepted years later, and even then it was more “C compiled with C++” than anything else.

The major boost for C++ came from the OS vendors for home and office computers, Apple, IBM and Microsoft, alongside Zortech, Borland and Watcom, with their full stack C++ frameworks, something that C++ has ironically since lost (OS SDKs where C++ has the front role).

> Video games community is very luddite in what concerns adoption of technologies, they usually only move forward when the platform owners force them to do so.

I think this is an unfair assessment. “Real devs” had to use assembly language because when targeting consoles and low-end home computers, this was the only really performant option for a long time. For a lot of types of games this doesn’t matter so much, but there’s a long ongoing trend in big titles really trying to cram as much visual fidelity as possible into the target machine.

Same deal with C++ today. You can use Unity of course, no one is any less of a “real game dev” because of it, but it’s simply not an option if you want to work on cutting edge tech for AAA titles. There are tons of other things you may want to work on, other boundaries to push, which I think is why Unity is such a popular option. But I think it’s unfair to simply chalk it up to luddism. Present the available alternatives that compare favorably in terms of development speed and performance for non-critical software instead.

While it might feel unfair, I wasn’t attacking anyone on purpose; it is based on experience, from my demoscene days, to the friends I got to meet in the industry, to my former past as an IGDA member, and the ways I have kept in touch with the industry, even though I decided that the boring corporate world, with graphics programming as a hobby, was something I would rather spend my time on than the typical studio life.

While the demands of gaming industry have always driven the hardware evolution on mainstream computing, most studios only move to newer programing languages when the platform owners force them to do so.

The gaming industry is not known for being early adopters of new software stacks, and many studios would to this day actually use pure C instead of C++ if the console vendors would give them C based SDKs.

> While the demands of gaming industry have always driven the hardware evolution on mainstream computing, most studios only move to newer programing languages when the platform owners force them to do so.

My point is that it isn’t a question of new vs old. It’s a question of fast vs slow. There was a reluctance to adopt e.g. Pascal because the popular implementations favored convenience of implementation (UCSD Pascal, which generated code for a virtual machine) or speed of compilation (Turbo Pascal, a single pass compiler) over the quality of the generated code. For a long time, C compilers generated code that couldn’t come close to measuring up to hand-written assembly.

I’ve seen plenty of old games and demos written in C and Pascal. Almost always using these languages as organization frameworks for executing snippets of in-line assembly where speed actually mattered.

So what are the alternatives to C++ today? A lot of game developers use Unity and write code in C#. Unity itself is of course written almost entirely in C++. Rust? Well, if you can figure out exactly when memory is freed, which Rust can make a bit of a puzzle. Zig seems like it could be a nice contender, at some point in the future. Swift? If you can accept the cost of reference counting.

All these are great options IMO, just perhaps not for the low-level work that goes on at the big high tech game studios. The closest thing to a contender is maybe Rust. The game industry’s reluctance to adopt Rust is hardly unique to them.

I know several studios local to Vancouver that are making heavy use of Go, Swift or Rust. And of course, a few that are deep into HTML5. There are games being shipped written in mruby.

I’m not in agreement with your assessment that the industry is conservative; it is largely responsible for pushing graphics into programmable pipelines, for instance, and the vendor-preferred language lock-in for consoles hasn’t really been a factor since the Indie revolution took the industry by storm.

I don’t take indies into consideration in my remark; consoles and mobile OSes are where the money is, and none of those languages have a place there currently, with the exception of Swift on iOS.

Kotlin, Swift and Javascript are in plenty of mobile games. There are loads of HTML5 games on Steam. There are Switch games written in Ruby.

I get the impression you’re an outsider looking in, relying on decades-out-of-date personal experience to understand what you see.

Those are indie games.

Naturally Kotlin and Swift are in mobile games; they are part of the official user-space SDK languages.

I know for a fact that teams at EA are using such tech, as are teams at Microsoft and others.

Also, thumbing your nose at Indie games is odd, considering the sales they’ve enjoyed and the extent to which the industry has adjusted to adapt to their surge in popularity.

I can’t imagine Nintendo in the 90s treating indie devs the way it treats them today.

Everyone is using Swift and Kotlin when targeting iOS and Android, it is either them or Objective-C and Java.

C++ doesn’t have full access to OS APIs and some integration is always required.

Swift, Kotlin, Objective-C and Java are unavoidable when doing iOS and Android development; the OS features that are exposed to C and C++ aren’t enough for doing a game.

Adoption is driven by platform owners.

I don’t think it’s a very bad example at all, it’s just not the only reason C++ “didn’t die out”.

I worked as a C++ programmer from something like 2001 – 2010, and all video games companies used it. I wouldn’t have learned how to program properly in it without the games experience I gained.

My introduction to C++, was Turbo C++ 1.0 for MS-DOS, in 1992.

Until 2001, you had Mac OS, Windows, OS/2, BeOS, EPOC, Taligent, the ill-fated Copland, SOM, COM, CORBA and Newton all using C++ as the main programming language for enterprise applications.

The argument is why C++ hasn’t died, and the video games industry has undoubtedly contributed to its staying power. Most of the things you list have replaced C++ with other languages.

I am not saying it hasn’t contributed, rather that it hasn’t been a critical factor to keep C++ around, the OS SDKs have been it.

Had Apple, Microsoft and Google decided otherwise, game community opinions wouldn’t matter.

> game community opinions wouldn’t matter.

That’s ludicrous. Because of game industry demand for programmable graphics pipelines we now have the modern AI industry. You’re welcome.

Because of game industry demand for high-performance, low-latency programming languages C++ stuck around. You’ve scoffed that it’s basically used as C-in-a-C++-compiler, but that’s because the steering committee seems to be completely out of touch with what people want C++ to do.

Worth watching:

Sure, game devs ignore modern C++ and the STL, but they do so because modern C++ and the STL, as defined, are not zero-cost abstractions. If we wanted a language with a predictable 30% overhead we’d use C#, and we do, but when it matters we want something where the core language is without serious burden in its use.

Why use C++ if zero-cost abstractions aren’t important to you? Honestly, what’s the value? You’ve already given up frames, you may as well accept some to handle memory management, continuous garbage collection, ownership safety and so on.

Programmable graphics pipelines were originally done at Pixar with RenderMan, and used in Hollywood movies like Toy Story; definitely not a game-related community.

Yes the games industry demand for better hardware has driven mainstream computing to adopt them.

And naturally we got shader Assembly, followed up by C dialects like Cg and 3DLabs initial GLSL implementation.

C++ on the GPUs happened thanks to CUDA and C++AMP.

Even Vulkan would keep being a bare bones C API if it wasn’t for NVidia’s initial use of C++ on their samples SDK.

Hardly any programming language innovation being done by gaming companies, with exception of snowflakes like Naughty Dog.

How is it a bad example of something which maintains the popularity of C++ when you acknowledge that C++ has been in use in the time period I mentioned, for that industry?

A few years after the Playstation gained prominence would be around 1999; and shortly thereafter Unreal Engine took the industry by storm.

EASTL is how old, now?

I still remember Carmack’s comments of his initial forays into using C++ for idtech.

I don’t think it’s fair to shrug us off.

Also Visual C++ and Visual Studio. I worked on desktop software from around 2007-2010, not that long ago; all our software line was built in Visual C++ with MFC.

It’s hard to believe how fast things changed and desktop apps were replaced by web apps…

Not everywhere, I am now back at doing Web related UIs, but have managed to keep doing desktop stuff every now and then.

There are domains, like laboratory automation devices in life sciences, where browsers are persona non grata in air-gapped environments, plus they lack the necessary hardware integration.

So between daemon + browser and a proper desktop application, most customers would still rather have a nice desktop application.

“It’s hard to believe how fast things changed and desktop apps were replaced by web apps..”

Mostly business type front ends and it actually mostly makes sense for that purpose. But there are huge areas where browsers play no role.

I worked on Windows desktop apps around the same timeframe, maybe a bit later. IMO, C#/Winforms killed C++/MFC, due to being massively simpler and more productive, without giving up much in the way of performance.

Why did C++ survive the advent of Java? One reason: the mainstream JVM, HotSpot, is written in C++. As such, the production Java ecosystem is not self-hosting and relies on C++ to be sustained.

(Writing a JVM in just Java is doable, and some research JVMs have done it, but that approach hasn’t yet been fully productionised.)

There are probably >100,000 C++ devs out there, and ~100 of them work on HotSpot. I don’t think the HotSpot devs are significant in encouraging C++ usage.

I think it demonstrates the point – that if Java is not able to be fully self-hosting in practice, then there are significant gaps in its abilities – and those same gaps in its abilities that drove HotSpot devs to implement HotSpot itself in C++ instead of Java also drive other projects to choose C++ over Java.

I don’t really buy into the fact that enterprise apps are written in C++ because gamedev uses it

No, while C interop was useful it was mostly because it was easier to write low latency code in C++.

> CLU might be the most influential language that nobody’s ever heard of. Iterators? CLU. Abstract data types? CLU. Generics? CLU. Checked exceptions? CLU.

Well, I learned something new and I spend a lot of time reading about PL (and even teach undergrad PL!). Had never heard of CLU before.

I’m still trying to reconcile “most influential” with “nobody’s ever heard of”. It may be that CLU was the first to implement some of these concepts, but if those concepts could later just as well have been rediscovered (like the iterator design pattern) as the obvious thing to do, then where does that leave us?
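For what it’s worth, CLU’s yield-style iterator is one case where the shape survived rather than being rediscovered from scratch: the same pattern is everyday code in Python generators today. This is ordinary Python, shown only to illustrate the construct CLU introduced:

```python
# CLU pioneered the yield-based iterator; a generator suspends at
# yield and hands one value at a time to the consuming loop.
def evens(limit):
    n = 0
    while n < limit:
        yield n      # suspend here, resume on the next request
        n += 2

print(list(evens(10)))  # → [0, 2, 4, 6, 8]
```

In CLU the consuming `for` loop drove the iterator the same way; the caller never sees the suspended state, which was the abstraction-preserving point of the design.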

In this case there’s a lot of evidence that at least the language designers were paying attention to (Dr.) Barbara Liskov’s published academic work over the decades, even if academic papers/languages rarely make it into mainstream consciousness. At a brief glance it looks like an Erdős-number game of a sort, in that CLU papers were referenced a lot by other articles, and those articles were referenced by a lot more. There is probably a lot of one- or two-degree separation.

One specific dot this article helped me connect (thanks, article) was that the Barbara Liskov of CLU is the namesake of the Liskov Substitution Principle, which has definitely made broader waves in mainstream OOP consciousness among programmers. It originated in a paper two decades after a lot of the CLU work, so it isn’t directly a part of CLU’s own influence on the world, but it is quite probable that CLU’s influence is deep inside the formation/elucidation of the Principle, in that Barbara Liskov was clearly thinking about the problem area for decades, and in some cases decades ahead of colleagues and practical applications.

> reconcile “most influential” with “nobody’s ever heard of”

We ignore the past. I’ve worked with people who hadn’t heard of Alan Kay. I worked with a guy tasked to revamp an expert system who had never heard of Prolog.

I work with a lot of people who write software and a few professional developers (~10-20 years experience in .NET & Java), and only one of them (the one with a computer science degree) has ever heard of Prolog, and none had ever heard of Smalltalk, APL, Forth, or Lisp when I brought them up over lunch. They’re great at what they do and are much more experienced/talented than I am at creating software, but it always makes me wonder why more professional software developers aren’t more curious about the past and other alternative solutions. I can tell you how engineers did my job each decade going back about 100 years and how all the tools evolved over that time.

Generics, too. And contrary to common wisdom, it was CLU that actually introduced checked exceptions, not Java. Alongside Mesa, it was also one of the first modular languages.

Glad to see a mention of CLU; it pioneered a lot of ideas. Reading Joe Duffy’s blogs about the Midori project (greenfield language/system design at Microsoft), he mentions CLU as an inspiration a number of times.

Despite having a CS degree I had never heard of CLU, which is a shame. Author is Barbara Liskov of the Liskov Substitution Principle, influential in early OO design.

Interesting to see Pascal on the list – it was the language of choice in my high school and my first programming language, back in 2010. I haven’t touched it since, but I do remember it was pretty easy for me to pick up the syntax, especially compared to languages I would learn later in uni, like, say, C.

In Lithuania, and perhaps in similar countries, Pascal is still the main programming language in classrooms. Also, no other language has beaten Pascal in the number of books published, so it will stay around for a very long time.

I almost started a new retrocomputing-related project in Pascal recently, because FreePascal seems to be the only stable, maintained toolchain that can still target the 8086.

I can confirm! My high school was in Slovakia and I know other people had the same experience with Pascal.

I still write almost all my code in Pascal.

It compiles very fast to native code, and has properties with implicit setters/getters, null-safe strings with mutability xor aliasing, and function and operator overloading.

It is really amazing.
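For readers who haven’t used Object Pascal, here is a rough Python analogue of what “properties with implicit setters/getters” gives you; the class and names are purely illustrative:

```python
class Counter:
    """Rough analogue of an Object Pascal property declared as
    `property Value: Integer read FValue write SetValue`."""

    def __init__(self):
        self._value = 0

    @property
    def value(self):
        # "read FValue": reads go straight to the backing field
        return self._value

    @value.setter
    def value(self, new_value):
        # "write SetValue": writes run a method that can validate
        if new_value < 0:
            raise ValueError("value must be non-negative")
        self._value = new_value

c = Counter()
c.value = 5       # looks like a plain field assignment, runs the setter
print(c.value)    # prints 5
```

The point, in Pascal as here, is that callers keep writing field-access syntax while the author retains the freedom to intercept reads and writes later.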

Wait, Pascal was in use in 2010? I thought it was a dead language when I used it in high school in 1997-1998.

Pascal has been in an incredibly strange state of being both dead and alive simultaneously for the past thirty years.

…but yeah, it was definitely doing pretty good throughout the 1990s.

“That is not dead which can eternal lie, / And with strange aeons even languages may die.”

Weird, right? I learned Pascal in high school in Ontario around 2010 as well. I remember being really frustrated at the time because we weren’t learning some modern language like Java or Python. Instead we were stuck with Turbo Pascal, writing in an IDE so old it only supported 8-character filenames. Thank god we had an amazing teacher, who ended up making it one of the best CS classes I’ve ever taken.

I know Turing (which is Pascal packaged up with a graphics library, manual, and editor, all in one exe) was in use in Ontario schools as late as 2010.

Well, depending on whether you count Free Pascal and Delphi (aka Object Pascal) or only Wirth’s original Pascal, it is still used today.

I switch quite often, sometimes between 3 languages a day: Delphi/Lazarus/FreePascal, C++, and JavaScript, so it now takes me only a few minutes to adjust.

I try not to switch too often, just because I find it hard to remember the details. The one that always stumps me the most is comments — I always stare for several seconds trying to remember how comments work in that language.

Oh, I completely agree. I do not switch because I want to, but because I have to.

Very surprised that ColdFusion isn’t on this list. Say what you want about it, but remember this… You know how all you React, VueJS, and frontend developers just LOOOOOVE components… Yeah… ColdFusion had that first, in what were called “Custom Tags”. Heck… It had a lot of features that you find today in almost every other language, but it is dying a SLOOOOOOOOW death.

ColdFusion had much more limited exposure to the general world than say PHP. Also, arguably and IMNSHO, ColdFusion was quite bad at a lot of its own features. Custom Tags specifically as the given example, were barely more than duct tape over SSI (Server-Side Includes) and never really had a “proper” Component model or DOM for most (if not all, again IMNSHO) of its history.

(CF is a language on my list of hopes to never see again.)

It’s questionable (although certainly possible) whether ColdFusion’s custom tags inspired custom tags in other languages. There were a lot of other elements in the air at the time, such as server-side include syntax, that made incorporating programming logic into markup a logical next step, and from there it isn’t too far to custom tags. Of course, it’s hard to retrospectively estimate the amount of brain effort needed to leap between concepts, so maybe my assessment is way off, but this is where finding direct documentary links showing how a technology developed can be helpful.

You are correct that ColdFusion pioneered some modern features, but the author had a specific definition of influence, and states: “Before we start, a quick primer on finding influence. Just knowing that X was the first language with feature Z doesn’t mean that X actually influenced Z.”, and then goes on to state that influence only counts if it exists in citations of papers or documentation.

You can absolutely go to town arguing whether or not their definition of influence makes sense, though.

There are a few things present in COBOL that I would like to see lifted into more modern languages. One is the use of the full stop as an expression terminator, as in English, instead of the semicolon. Another is the whole “picture” mechanism for number formats.
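For those who haven’t seen it, a PICTURE clause describes a number’s printed shape character by character (e.g. `ZZ9.99` means two zero-suppressed digits, one mandatory digit, a decimal point, and two decimal digits). A hypothetical Python sketch of a tiny subset of the idea:

```python
def picture(value, pic):
    """Format a number roughly the way a COBOL PICTURE clause would.
    Supports only '9' (mandatory digit), 'Z' (leading-zero-suppressed
    digit), and '.' (decimal point): a tiny subset, for illustration."""
    int_pic, _, frac_pic = pic.partition(".")
    frac_digits = len(frac_pic)
    scaled = round(value * 10 ** frac_digits)
    digits = str(scaled).rjust(len(int_pic) + frac_digits, "0")
    int_part, frac_part = digits[:len(int_pic)], digits[len(int_pic):]
    out, suppressing = [], True
    for ch, d in zip(int_pic, int_part):
        if ch == "Z" and suppressing and d == "0":
            out.append(" ")   # zero suppression: blank instead of "0"
        else:
            suppressing = False
            out.append(d)
    return "".join(out) + ("." + frac_part if frac_pic else "")

print(picture(7.5, "ZZ9.99"))    # prints "  7.50"
print(picture(123.4, "ZZ9.99"))  # prints "123.40"
```

Real PICTURE clauses go much further (currency signs, sign handling, insertion characters), but the declarative "draw the output shape" style is the part worth stealing.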

More languages should also steal Verilog’s use of _ as an optional digit separator. It just arrived in C# 7.
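Python picked up the same feature in 3.6 (PEP 515), and the separator works in any numeric base:

```python
# Underscores group digits purely for readability; the values are identical.
million = 1_000_000
mask = 0xFF_FF_00_00

assert million == 1000000
assert mask == 0xFFFF0000
```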

> One is the use of the full stop as an expression terminator, as in English, instead of the semicolon.

That’s something people complain about when looking at Erlang code. Thanks to its Prolog heritage, it uses commas, semi-colons, and full stops to terminate statements.

It’s actually very easy to remember which is which, despite the general angst, because they are directly analogous to those punctuation marks’ usage in English.

Commas indicate a continuation of statements, semi-colons separate clauses within a function, and full stops complete a function definition.

“_” has been an option to separate digits in Perl for ages. I use it all the time if the integer is >10,000

I also liked the “corresponding” modifier for verbs like “move” and “add”. It caused the operation to only be done on those fields which existed in both the source and destination and ignored the others.
Do any other languages have this?
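For readers who haven’t met it: `MOVE CORRESPONDING` copies same-named fields from one record to another and ignores the rest. A hypothetical Python sketch of the idea, using dicts as the records:

```python
def move_corresponding(source, destination):
    """Sketch of COBOL's MOVE CORRESPONDING: copy only the fields
    whose names exist in BOTH records, leaving the others untouched."""
    for field in source.keys() & destination.keys():
        destination[field] = source[field]
    return destination

src = {"name": "Ada", "total": 100, "extra": "ignored"}
dst = {"name": "", "total": 0, "city": "London"}
move_corresponding(src, dst)
print(dst)   # "extra" is skipped because dst has no such field
```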

If we start using periods to end expressions, would we then have to use semicolons to chain things together? Or would you let the compiler know that foo.size(). was okay and that it shouldn’t be waiting around for another function call there?

Hum… Prolog uses periods to end declarations and, as you’d expect, uses semicolons as combinators.

It reads really well, because it also uses commas as combinators, with higher precedence than the semicolon. But it does not map well onto other paradigms.

Juxtaposition (`foo size()`) is a serious contender I think. In fact, I consider it is one of the most undervalued syntactic choices.

In HS I built a toy language where it was `foo’s size()`, if you want to go even further down the English-like rabbit hole, while trying to stick to BNF-ish CFGs. `’s` is actually a surprisingly unambiguous token in a language without a “char type” that sticks to double quotes for strings.

As cool as E was, it was almost entirely based on core ideas lifted from Joule and KeyKOS. I can’t really see any influence beyond a “wouldn’t it be great if language X had feature Y like you find in E”.

I was also thinking of the community, as its members moved out into other languages and carried with them some of the ideas they were exposed to in the E community.

Are there active languages influenced by E? I came across that erights site years ago, and I always meant to learn more about it.

Promises for async results have been around for decades, and E got the idea from Joule.

The ML section misses some significant successors: there was a fork between CAML and SML. OCaml is still widely used, and has its own spawn, such as ReasonML and BuckleScript.

Neither ReasonML nor BuckleScript is really an OCaml descendant. ReasonML is a different syntax for OCaml, an even closer match than CoffeeScript 1.x was to JS. Compiled ReasonML code cannot be distinguished from code that was written in OCaml. BuckleScript, OTOH, is a set of forks of older compiler versions that adds a JS backend. It works with both the Reason and OCaml syntaxes.

ReasonML is therefore much closer to OCaml’s revised syntax (a failed earlier alternate syntax) than it is to being its own language.

> At a time when adding two lists of numbers meant a map or a loop, APL introduced the idea of operating on the entire array at once.

Am I missing something, how is this different from a map?

It keeps going!

      3 3 3 ⍴ ⍳10
     1  2 3
     4  5 6
     7  8 9
    10  1 2
     3  4 5
     6  7 8
     9 10 1
     2  3 4
     5  6 7
      M ← 3 3 3 ⍴ ⍳10
      M*M
    1.00000E0  4.00000E0  2.70000E1
    2.56000E2  3.12500E3  4.66560E4
    8.23543E5  1.67772E7  3.87420E8
    1.00000E10 1.00000E0  4.00000E0
    2.70000E1  2.56000E2  3.12500E3
    4.66560E4  8.23543E5  1.67772E7
    3.87420E8  1.00000E10 1.00000E0
    4.00000E0  2.70000E1  2.56000E2
    3.12500E3  4.66560E4  8.23543E5

That’s 3 nested 3×3 matrices, and we take the pointwise product of each of them with just the usual multiplication operator.

How many maps did you want?

Incidentally, this property is why most of the “translations” of APL expressions into another language you’ll see online are cheating: they assume a specific rank for arguments and “bake in” an appropriate degree of mapping.

APL by default operates on entire arrays, even if they’re multi-dimensional like a matrix. You can just say `matrix × 3` and it will automatically do the nested for-loops behind the scenes.
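NumPy inherited this array-at-once style from the APL lineage; a small Python sketch of the same behavior (the values mirror the 3 3 3 ⍴ ⍳10 example above):

```python
import numpy as np

# Rough equivalent of APL's 3 3 3 ⍴ ⍳10: the values 1..10, repeated
# cyclically, reshaped into a 3x3x3 array.
m = (np.arange(27) % 10 + 1).reshape(3, 3, 3)

# One expression applies pointwise to every element, whatever the
# rank of the array: no map() calls and no nested loops.
doubled = m * 2
powered = m ** m   # the n-to-the-n table from the APL session above

print(doubled[0])  # the first 3x3 plane, every element doubled
```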

The article is about “mostly dead” languages. Lisp lives on under many forms, Common Lisp, Clojure and Racket being some of the more popular ones.

If we’re only talking about influential languages, I’d agree with you and put Lisp at the top.

For all the mentions of FORTRAN in the description of other languages, it’s a real surprise that FORTRAN doesn’t rate a mention of its own — though there are corners of the world where it’s still in use.

It’s notable not only for being the Lingua Franca of scientific computing through at least the ’80s, but also for breaking ground in programming language technology, starting right at the beginning — the Fortran I compiler, for the IBM 704 in 1957, was the first to have an optimizer (and the technical papers on that compiler introduced terminology which has since become standard to the field — e.g. “basic block”).

If you leave the HN/SV bubble, you find huge amounts of Fortran. It still plays a significant role in scientific computing. If you lost track of it in the ’80s, you might be interested to know that the language had major updates in ’90, ’95, ’03, ’08, and 2018. It’s just not a part of the world of cat-pic-sharing websites and innumerable “Foo of Bar” startups.

Same with COBOL.

One thing I credit COBOL with is the use of descriptive variable naming. In FORTRAN you see variables named i, j, x, etc. (just look at Numerical Recipes), but in COBOL you see TOTAL_MONTH_SALES and the like. Today, doing that is considered good style and “self-documenting”.

COBOL was intended from the start to be “self-documenting” and English-like, the thinking being that it could be used by ‘non-programmers’ to create business logic. Not sure I buy that it achieved that goal, but it was the intent.

That was an incredibly interesting article. I never quite grasped the historical significance of CLU.

Something I always wonder about COBOL, since it has so little influence on other languages, is if there are good ideas that we missed out on.

ABAP, which runs in all the SAP systems that Fortune 500 companies use as an ERP, is derived from COBOL. The core product has 400 million lines; the extensions would be 4X that.

Guess online and efficient.

I’ve done my 10k lines of batch COBOL. It is a waste of life. Verbose. C is better, but unfortunately it is not very good at I/O. Still…

Batch COBOL with a mainframe parallel I/O sub-processor helps a lot.

Maybe CICS, where 1000+ users can share a machine (3-5 MIPS, even emulated) with 16MB, is something unusual.

No one is doing anything new in COBOL but there is so much old stuff that could only be replaced by a massive investment in rebuilding infrastructure. COBOL isn’t dying anytime soon.

At one point, like 10-15 years ago, I knew experienced, well-paid COBOL programmers being laid off and replaced by kids fresh out of school. And then CS programs, if they didn’t stop outright, at least greatly reduced their teaching of COBOL courses. And no one coming out of school learning Java and Python and Node wants to write COBOL.

And then old-timer COBOL programmers started retiring, but companies (especially banks) were not replacing their existing mainframe infrastructure. So now you have a gap between supply (COBOL programmers) and demand (mostly banks), to the point that retired programmers are doing part-time work for $200 an hour. Here’s an article from 4 years ago:…

I know of several dozen companies in my state alone that are still “doing something new” in COBOL, including the company I currently work for. No, that’s not just maintenance; it’s new apps and projects. Most of these are on a ‘Fortune’ list in terms of size.

Startups and small companies can ignore anything that isn’t new and shiny; companies processing billions of dollars/transaction know better.

Pascal had two moments where it really peaked: Turbo Pascal and the Apple Mac. After that I’m not sure what its use was. UCSD Pascal was always slow.

Does anyone know whether the Turbo Pascal DOS/Windows compilers produced slower code than C compilers at the time? For a long time I stuck with Pascal and resisted switching to C, even when everyone around me was doing so. One thing I do seem to remember from that time is that the things people were coding up in C seemed to run faster (not sure if it was Borland’s Turbo C or something else). It was one of the arguments I remember for leaving Pascal.

Am I misremembering and it was just an urban legend/marketing from C compilers? Thinking back it doesn’t seem logical that C would offer any significant performance benefits – both Pascal and C were compiled down to machine code and had approximately the same levels of abstraction.

While it is for Windows and not DOS, some time ago I wrote a small raytracing benchmark for C and Pascal (coded pretty much the same in both languages), and the Borland compilers for C++ and Delphi (Object Pascal) had pretty much the same results:

What was probably the case is that there were more C compilers so some of them produced better code than Borland’s.

Urban legend. As a Borland fanboy using their Pascal and C++ products, I found the generated binaries were pretty much the same, even if you might have had to play with compiler pragmas, like disabling bounds checking.

In the MS-DOS days, if you actually cared about performance, a macro assembler was the only way to achieve it.

Borland Pascal had built-in assembly, which was very easy to use and integrated with the rest of the language. This let me have a DOS program with its own high-performance graphics for the GUI (I basically stole Motif’s visual design, with some adaptations) and proprietary preemptive multithreading. C programs venturing into the same realm were not really any faster and had to use assembly anyway for performance-critical parts.

Yes, that is how I used it as well.

To share a similar anecdote: I was so proud of myself for having created my own mouse-support unit, which I then plugged into a couple of BGI-based applications.

Indeed it did, and it was magic. Just an asm: declaration and off you went, kitchen sink and access to variables included. I wrote some fairly nifty graphics stuff that way in the early Delphies, mid to late nineties. Great fun, still miss it.

Smiling at the comments from afar 🙂 Turbo Pascal/Delphi has probably made companies more money commercially than Lisp or anything below the top 8 in TIOBE (haven’t looked at it lately). The biggest codebase I’ve ever worked on was in Delphi – all 8 million lines. Anybody ever worked on a private-company codebase that big? (Fun with svn blames.)
