jcranmer 3 days ago

Whenever you have a proposal like this that essentially laments that we don't have the nice shiny thing, it's quite useful to take a few steps back and ask "well, why don't we have it," and try to consider some of the problems with it. This paper is no exception--advocating for a return to a Smalltalk/LISP machine approach as opposed to Unix; why don't we have those things?

The facile answer is the "no true Scotsman"--we've never really had a true, whole-system approach to this, only approximations, and if only we had that true, whole-system approach, everyone would fall in love with it. I'm not persuaded. We were forced to use a Smalltalk environment for one of our classes, and the reaction was pretty universally negative. Not because it was a lamentably incomplete instantiation of a wonderful thing, but because if you screwed up--as anybody is wont to do--you would just absolutely and sometimes irreversibly hose the entire system. Imagine poking at an error stack trace and thereby breaking the only editor you could use to fix it, with the only means of recovery being to roll back (outside of the all-encompassing environment, mind you) to an older version.

So I think that's the real reason these systems never took off: "everything works together" too often becomes "everything breaks together." Hotpatching code sounds nice, but it now means you get to deal with stuff like "oops, the global cache has both the old and new version stuffed in it and something else is hosed as a result." Replacing "sqrt" with "mysqrt" everywhere also implies the ability to replace "open" with "vfsopen" everywhere... only to accidentally map it to a VFS that has no files and now you can't open any files on your system from any application.
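
A contrived Java sketch of that stale-cache hazard (all names invented; the point is only the mechanism--shared state that outlives the "patch"):

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.function.DoubleUnaryOperator;

    public class StaleCache {
        // A global cache of resolved functions -- exactly the kind of shared
        // state a whole-system hotpatch has to hunt down and invalidate.
        static final Map<String, DoubleUnaryOperator> CACHE = new ConcurrentHashMap<>();

        static DoubleUnaryOperator resolve(String name, DoubleUnaryOperator current) {
            // First resolution wins: a later "patched" definition never evicts it.
            return CACHE.computeIfAbsent(name, k -> current);
        }

        public static void main(String[] args) {
            DoubleUnaryOperator sqrtV1 = Math::sqrt;
            DoubleUnaryOperator callerA = resolve("sqrt", sqrtV1);

            // "Hotpatch" sqrt system-wide with a new definition...
            DoubleUnaryOperator sqrtV2 = x -> Math.sqrt(Math.abs(x)); // "mysqrt"
            DoubleUnaryOperator callerB = resolve("sqrt", sqrtV2);

            // ...but the cache still serves the old version to everybody.
            System.out.println(callerA == callerB); // true -- both got v1
        }
    }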

  • o11c 3 days ago

    For me the problem has never been about outright breakage.

    For IDEs, I have often observed: the whole is less than the sum of the parts. Trying to cram them all into one UI ends up removing a lot of capabilities, and it is always more difficult to add additional parts after the fact (since we should never assume all tools can be up-to-date with a changing world), compared to simply using the tools individually.

    And IPEs are far more complicated than IDEs.

  • nonrandomstring 3 days ago

    > So I think that's the real reason these systems never took off: "everything works together" too often becomes "everything breaks together."

    Beautifully put.

  • pjmlp 2 days ago

    I found comfort in the Java and .NET ecosystems; they are not the whole package, but they are the closest we as an industry apparently managed to get to the Xerox PARC ideas.

  • igouy 2 days ago

    > Imagine poking at an error stack trace and thereby breaking the only editor you could use to fix it, with the only means of recovery being to roll back (outside of the all-encompassing environment, mind you) to an older version.

    Imagine being able to go back to an older version, and then being able to re-play the log of actions you'd taken to just before you broke the editor.

    The Smalltalk changes log lets you do that!

    https://cuis-smalltalk.github.io/TheCuisBook/The-Change-Log....

    > sometimes irreversibly hose the entire system

    No, not "irreversibly".

    Not without the kind-of malicious intent that would deliberately delete and overwrite back-up files!

thom 3 days ago

Worth reading the whole document here instead of making assumptions based on the title. In some ways the authors would recognise almost all current developer tools as integrated environments, even those we consider editors as compared to those we consider IDEs, like the JetBrains end of the spectrum.

But in other ways this is about the failure modes of the UNIX philosophy. When everything is a file or streams of text, the authors point out that all you have are containers, not actual _things_. Your source code isn’t a first class citizen, and the doc mentions in passing a content addressable hashing scheme that is reminiscent of git (or the Unison language). The second part of the article might be reminiscent of PowerShell.
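
As a rough illustration of the content-addressing idea--not the paper's actual scheme, just the git/Unison-style core move of naming a definition by the hash of its bytes (store shape and names invented):

    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;
    import java.util.HashMap;
    import java.util.HexFormat;
    import java.util.Map;

    public class ContentStore {
        // Definitions are keyed by the hash of their bytes, so identity is
        // the content itself, not a mutable file path.
        final Map<String, String> blobs = new HashMap<>();

        String put(String content) throws Exception {
            MessageDigest d = MessageDigest.getInstance("SHA-256");
            String key = HexFormat.of().formatHex(
                    d.digest(content.getBytes(StandardCharsets.UTF_8)));
            blobs.put(key, content);
            return key; // callers hold a reference to the thing itself
        }

        public static void main(String[] args) throws Exception {
            String id = new ContentStore().put("sqrt(x) = exp(ln(x) / 2)");
            System.out.println(id.substring(0, 12)); // stable id for this definition
        }
    }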

Ultimately modern developers are very well served, but UNIX is in many ways a commodity layer operating beneath fairly monolithic development environments. But in an imagined UNIX future here you’d be able to dynamically edit your libc, and effortlessly connect and debug across multiple UNIX tools like awk, while incrementally compiling code and loading it into a running image with seamless FFI across multiple languages - a Smalltalk or Lisp machine but constructed of UNIX tools like C and the command line. That never really materialised; I certainly don’t see it in the disjointed workflows of the average vim-in-tmux developer. I don’t know if this is lamentable or not, but it’s an interesting angle.

  • myaccountonhn 2 days ago

    There are some tools that try to be more “unix-y”. Some that come to mind are Kakoune and Sway. It is quite interesting how you can leverage your unix skills when using those tools to quickly extend them with new behaviors.

    I also wonder why servers don’t leverage IPC (or dbus) for communication. It feels like Linux is a glorified docker container host for cloud services that do what Linux could already do on its own.

  • pjmlp 2 days ago

    There are still enough features that IDEs like JetBrains' and Visual Studio fail to provide, versus the experiences provided across all the Xerox PARC environments.

    Graphical REPLs with auto-correction, able to interact with the whole development environment (shells like PowerShell, as you mention, and Fish are the closest, and they don't take into account a surrounding IDE); the whole stack written in memory-safe languages, minus the underlying bottom-layer primitives.

    Solaris with NeWS, NeXTSTEP/OS X/macOS, and Inferno are probably the closest to the whole set of ideas with a UNIX lineage.

stergios 3 days ago

I worked on a very capable Unix-based CASE system for Athena Systems in '88 and '89. There are still some articles about it online, like here https://www.worldradiohistory.com/Archive-Electronics/80s/88... (complete with screen shots) on page 37. What I recall being the uphill battle was fighting the peace dividend granted by the Cold War ending. Defense contractors, our primary market, were in cost-cutting mode for a decade.

  • OhMeadhbh 3 days ago

    The 80s were a magical time. We still believed tools could help us make better systems. Then in the 90s everyone decided they were hardcore coders who didn't need crutches. And then ESR insisted that a bazillion people glancing at a couple of lines of code at a time would fix any bug. And in the 2000s someone seriously misunderstood XP and said that "Agile" meant you didn't need to have a spec or a design because you were writing code that was easy to change, and when you figured out what you were trying to do you just refactored your code and voilà! the MVP emerged fully formed out of the forehead of Zeus! In the 2010s we stopped doing unit tests because no one read Kent Beck's book and complained that they didn't know what a unit test was, and it seemed like a bad idea to write tests for code that hasn't been written (and they were probably right, because by this time no one knew what the product was supposed to do, and we're just going to open source it so our community will fill in the functional bits that we didn't have time to get around to). And by now there's a single function call in Python 3.12 that does exactly what you want to do, so our entire startup is just an API wrapper around Python.

    People wonder why I still have a VAX and a PERQ and a TI Explorer at home that I program for fun.

    Which is my way of saying... hmm... the link above didn't include a reference to which page to look at but searching for "Athena" led me to page 37 and yes, that does look very nice.

    • pjmlp 2 days ago

      Back when we wouldn't mind rewriting stuff in Assembly across machines, to make an application "portable" instead of shipping a Web browser with the application.

      • 01HNNWZ0MV43FF 2 days ago

        I wouldn't mind writing text rendering myself but as far as I know nobody is paying for that

        • pjmlp 2 days ago

          I imagine the DirectX team might have some open positions on DirectWrite, for example.

somat 2 days ago

The Unix operating environment is integrated via the filesystem: a simple, easy-to-use hierarchical database that every object can slot into and access.

In fact, it is so simple and easy to use that it has become ubiquitous and regarded as a common, lesser thing, and we go searching for harder-to-use, less globally useful integration methods.

stergios 3 days ago

I believe one of the authors, L. Peter Deutsch, is the original developer of Ghostscript.

api 3 days ago

This immediately makes me think about a path not (widely) taken in software: the path of higher level languages and execution environments with strong first class support for composability, easy message passing, reflection, automatic memory management, etc., across the entire system and even across networks.

Examples include Smalltalk, LISP machines, the JVM, and the CLR. There are probably others. Edit: is Erlang one of these? I seem to recall it having some of this.

If you've never operated in one of these environments, it will be hard to imagine what they bring to the table. Here's an incomplete list:

- All errors cause traceable exceptions and everything can be debugged.

- Statistics and diagnostics are a built-in feature of the runtime environment. Everything either always is or can easily be instrumented and profiled. The developer usually has to do nothing to allow this. Extremely detailed maps of memory and CPU usage can be produced for anything that is running.

- Software can be hot upgraded at the object/module level in running applications and services. If you design things to take advantage of this, full restarts can be reserved only for very major upgrades of the runtime or hardware.

- Everything is memory safe. Buffer overflows and similar "segmentation fault" errors are almost impossible.

- Composability -- applications and modules incorporating one another and building on one another -- is reasonably easy and commonplace.

- Modularity and code reuse are way easier.

- Inter-process communication (IPC) and the network equivalent (RPC) are easy, sometimes automatic and trivial.

- Function definitions and calling conventions are rich, typed, and support runtime reflection.

- Software can easily and efficiently pass complex data structures across module and even program boundaries with little to no API glue code.

- Module/object and program state can be serialized, moved, stored, deserialized, restored, etc., even across machines... or in some cases across architectures! (See the sketch after this list.)

I'm sure there are more highlights. Those are the ones I can recall now.
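
To make the serialization point concrete, here's a minimal Java sketch--plain java.io serialization standing in for the richer image-based mechanisms, with invented names:

    import java.io.*;

    public class StateRoundTrip {
        // Invented example type: any Serializable object graph would do.
        record Counter(String name, long count) implements Serializable {}

        public static void main(String[] args) throws Exception {
            Counter live = new Counter("requests", 42);

            // Serialize to bytes -- could just as well go to disk or a socket.
            ByteArrayOutputStream buf = new ByteArrayOutputStream();
            try (ObjectOutputStream out = new ObjectOutputStream(buf)) {
                out.writeObject(live);
            }

            // Restore on "another machine" (here, the same process).
            try (ObjectInputStream in = new ObjectInputStream(
                    new ByteArrayInputStream(buf.toByteArray()))) {
                Counter restored = (Counter) in.readObject();
                System.out.println(restored); // Counter[name=requests, count=42]
            }
        }
    }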

It's not that these systems are totally dead. The JVM and the CLR are used heavily in business. But they are absolutely not trendy and have fallen out of favor among most "hackers" and new startups/projects in favor of the modern Unix way.

The modern Unix way, also known as "cloud native," might be described as: balls of random shit bundled up as containers or VM images communicating over bespoke APIs using inefficient text-based protocols over HTTP. There are of course many variations of this, but this is the basic theme.

I suppose this paradigm gives you some of what I listed above, but it does so in a very bespoke, inefficient, and labor intensive way. You end up having to create and maintain complicated API glue layers, implement connectors between applications that speak endless bespoke protocols, and of course every application runs in a completely different runtime (or no runtime at all) and can in no way be investigated using any set of common debugging or diagnostic tools.

This is definitely a case of "worse is better." Having used environments like the CLR and JVM and played with the likes of Smalltalk, it seems obvious to me that higher level environments save massive amounts of programmer labor and pain, but we don't like them. Why?

I find it hard to answer this question, but like many cases of "worse is better" it seems like a revealed preference. The better thing just never "takes."

Edit: I've got some speculations. It could be some combination of:

- Worse isn't better, but worse is free and unencumbered. Many of the better environments cost money or come with some other license or limitations.

- It could be that programmers are "macho" and don't like the fact that these environments are easier. Real Men(tm) code in C with no bounds checking. I sometimes see this attitude on display with regard to Rust. "We don't need safety!"

- The clunkiness and inefficiency of "weakly coupled wads of shit" encourages profligate waste of cloud resources and the use of managed services to avoid having to manually babysit this stuff, which is very profitable for vendors and cloud providers.

- By making it easier to build and sustain complex systems, these environments tend to foster runaway complexity growth. Look at abominations like enterprise Java for examples. The clunkiness and fragility of "worse is better" limits complexity, which turns out to be a good thing.

- The examples I listed were all too closely bound to one language or paradigm. Maybe something like WASM will finally make this mainstream?

  • mistrial9 3 days ago

    Interesting read, but lacking fairness, since "weakly coupled wads of s**" is all at once pejorative, lacking useful meaning, and strongly underestimating the advances in software over 20+ years that are in said wads... Architects don't like a lot of things...

    • api 2 days ago

      Yeah, probably too harsh... I was thinking of Docker/containers when I wrote that, which really is "tar up your machine" as a software distribution mechanism the way most people use it.

      Most of what I've seen in "cloud native" is a horror under the hood.

  • pjmlp 2 days ago

    Hence why I keep around Java, .NET, and Node.js (which could be better, but whatever) as much as possible, and yes, I would place Erlang in the same group.

  • theamk 3 days ago

    I don't think you can blame this on vendors / cloud providers - many of the worst offenders there use JVM or CLR (looking at you, Atlassian!)

    But I think your other points are pretty good, and match my experience:

    - The "free and unencumbered" became very important in the internet era. I used to program in Delphi, which is a great visual IDE. I stopped this once I've realized that if I keep going, none of my friends can use my source code.

    - "macho"-ness, or how I'd call it, "barrier to entry" definitely has a role. For example Java is the language where I saw the most "catch (Exception e) {;}" clauses - which is one of the worst things ever you can do to users. The language itself is good, but the culture around it is often not.

    I'd also add that all those fancy systems are strongly associated with "single computer, desktop app" setups. Once I got used to running servers, many of the advantages just disappeared:

    "everything can be debugged?" - your server has no graphical shell, and may not even have JDK installed. If all your experience was local, that's a major problem right there. But OK, you've figured out remote debugging. But it is not very useful now, as you cannot stop the program which is serving live requests; and even if you do, your breakpoints get triggered from different requests. And remote clients time out while you study stack. So might as well write your next server program in Perl + CGI, at least you'd be able to add debug prints easily.

    (Yes, in the ideal case you'd have a working dev environment and not do it live on prod. That was pretty rare at smaller sites, though, especially in the 1990s-2000s.)

    The same can be said about "software can be hot upgraded" - that seems insane; that's how you get a unique snowflake machine which everyone is afraid to touch. Software should be restarted often, to make sure it's reset to a predictable state.

    And that "Everything is memory safe" means speed penalty, which people actually used to care about. I remember when I got my first webcam, with it's 640x480 resolution. Was I going to be coding the image processing in Java? No way, I wanted more than 1 FPS. It was C or maybe Pascal. What about that game? It's not going to be in Java either....

    (And the CLR's beginnings were damaged by the strong MS Windows associations, with all the associated fragility of "I've checked some checkbox somewhere in one of those 250 control panel tabs and now my web server no longer works. No, I don't remember which one it is." I know it's no longer the case _today_, but we are talking about the past...)

    And of course being bound to one language was not helping either. If I use Java RPC, only another JVM ecosystem language can use it; but if I use text-based HTTP, I can use almost any language ever made.

    Given all of those reasons, I am not surprised at all that the JVM/CLR are used for large business apps, while smaller things are written in other languages. And that means all the cool stuff is going to be C++/C/Rust/Python, while only the boring enterprise apps will be Java.

    • pjmlp 2 days ago

      > Given all of those reasons, I am not surprised at all that the JVM/CLR are used for large business apps, while smaller things are written in other languages. And that means all the cool stuff is going to be C++/C/Rust/Python, while only the boring enterprise apps will be Java.

      There is enough cool stuff on Android, and in embedded, done with Java.

      Also, although not at Java's level, back in the 1990s there were already two attempts to bring the Smalltalk experience to C++: Lucid Energize C++ (after their Common Lisp pivot) and Visual Age for C++ v4.0 (which introduced an image-like concept for C++ development).

      Both failed due to hardware resource demands and the related cost of such a capable workstation.

      However, the LSP idea traces back to systems like Cadillac, used by Lucid Energize C++ - not surprising, given that the main Visual Studio Code (née Monaco) developer is Erich Gamma, of Eclipse fame.

      Delphi and C++ Builder still offer a similar kind of IDE experience to this day; ironically, Visual C++ was never as RAD as C++ Builder, nor as visual.

      Additionally, CLion with clangd, plus Live++, offers some of the same experience that Energize C++ and Visual Age for C++ tried to make common in the 1990s.

      • theamk 2 days ago

        Is embedded still using Java? I remember all the talk about Java-on-a-chip in the early 2000s (remember TINI, the "Java-on-a-chip" from Dallas Semi?), and all the Java on feature phones, but it seems to have all died out by now.

        I think the only place that has embedded + Java now is one of those desktop/server-class PCs with gigs of RAM, multiple gigahertz of CPU, and full-blown Linux inside.

    • api 2 days ago

      Good points. A few responses:

      > "Everything is memory safe" means speed penalty

      Not much, and sometimes none if the optimizer is smart. See: Rust, Go. The JVM and CLR are not slow either, though they are a little less efficient than the former for legacy reasons... and of course like all benchmarks it will depend on the use case and code.

      > The same can be said about "Software can be hot upgraded" - that seems insane

      That's something that has to be engineered properly. I saw it done quite a long time ago, via distinct versions being distributed and then loaded in place of the currently loaded .jar files (Java). Things stayed consistent. This one does require some forethought and developer discipline to do properly, but it lets you build insanely high-uptime systems without the clunkiness and waste of current approaches. I guess the current approach works too, but this is just so elegant.
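
      A minimal sketch of that pattern, assuming the stable interface lives in the parent class loader; the jar paths and class names are invented:

          import java.net.URL;
          import java.net.URLClassLoader;
          import java.nio.file.Path;

          public class HotSwap {
              // The stable contract stays in the parent loader; implementations
              // come and go, each in its own throwaway class loader.
              public interface Handler { String handle(String request); }

              static volatile Handler current;

              static Handler load(String jarPath, String className) throws Exception {
                  URL[] urls = { Path.of(jarPath).toUri().toURL() };
                  URLClassLoader loader =
                          new URLClassLoader(urls, HotSwap.class.getClassLoader());
                  return (Handler) loader.loadClass(className)
                          .getDeclaredConstructor().newInstance();
              }

              public static void main(String[] args) throws Exception {
                  current = load("handler-v1.jar", "com.example.HandlerImpl");
                  // ...later, without restarting the process:
                  current = load("handler-v2.jar", "com.example.HandlerImpl");
                  // Callers that already grabbed the old handler finish on it;
                  // new calls through 'current' get the new version.
              }
          }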

      I agree about this stuff being PC-centric, but that's a function of the era in which it was developed. Nothing intrinsic about these systems prohibits their use in a cloudy environment. They'd just be deployed and used a little differently.

  • betimsl 3 days ago

    > Why?

    Huge interest in the field. Schools unable to teach the craft properly.

    • xraider42 3 days ago

      I kinda concur, and I think there is also a business-needs mismatch. OP's approach requires a very holistic and integrated view of things, a lot of experience, and academic knowledge. It would probably be a better approach, but it requires upping the requirements for who can be a SE and how it's taught; software might not be a commodity in this world.

      The beauty of today's "cloud native" approach is that everything is basically just a layer on top of another layer, and you can be completely ignorant of everything below and make stuff that, even if inefficient, works. We can churn out a lot of people, and yes, they will make crap, but crap that satisfies the needs of the business and creates real or perceived value for it. Arguments can be made that long-term it's a loss, since the codebase will rot faster, but everybody responsible will be long gone and won't care; they have no incentives for this.

      A week ago I interviewed a guy who had delivered multiple projects over his 10-year career at large companies, and he had no idea what multithreading or concurrency is. The academic in me weeps, but the manager/engineer in me was impressed that you can be so completely ignorant of all the lower layers and still produce said value. Would that be possible in the noted holistic approach?