Surprisingly deep and level-headed analysis. Jai intrigues me a lot, but my cantankerous opinion is that I will not waste my energy learning a closed source language; this ain’t the 90s any more.
I am perfectly fine for it to remain a closed alpha while Jonathan irons out the design and enacts his vision, but I hope its source gets released or forked as free software eventually.
What I am curious about, which is how I evaluate any systems programming language, is how easy it is to write a kernel with Jai. Do I have access to an asm keyword, or can I easily link assembly files? Do I have access to the linker phase to customize the layout of the ELF file? Does it need a runtime to work? Can I disable the standard library?
Iirc, pretty sure jblow has said he's open sourcing it. I think the rough timeline is: release game within the year, then the language (closed-source), then open source it.
Tbh, I think a lot of open source projects should consider following a similar strategy --- as soon as something's open sourced, you're now dealing with a lot of community management work which is onerous.
It's not even contributions: other people might start asking for features or discussing direction independently (which is fine, but jblow has been on the record saying that he doesn't want even the distraction of such).
The current idea of keeping Jai closed source is to control the type of people who would be able to alpha test it - people who would be capable of overlooking the jank, but would have feedback on fundamental issues that aren't related to polish. They would also be capable of accepting the alpha-level completeness of the libraries, and of distinguishing a compiler bug from their own bug or misuse of a feature, etc.
You can't get this level of control if the source is opened.
You can simply ignore them. This has worked for many smaller programming languages so far, and there is plenty of open source software that is still governed entirely by a small group of developers. The closedness of Jai simply means that Blow doesn't understand this aspect of open source.
Ignoring people is by itself tedious and onerous. Knowing what I do about him and his work, and having spent some time watching his streams, I can say with certainty that he understands open source perfectly well and has no interest -- nor should he -- in obeying any ideology, yours for instance, as to how it's supposed to be handled, if it doesn't align with what he wants. He doesn't care whether he's doing open source "correctly."
Yeah, he is free to do anything as he wants, but I'm also free to ignore his work due to his choice. And I don't think my decision is unique to me, hence the comment.
Maybe there are aspirations not to be a "smaller programming language," and he'd rather not cause confusion and burn interested parties by having it available.
Releasing it when you're not ready to collect any upside from that decision ("simply ignore them") but will incur all the downside from a confused and muddled understanding of what the project is at any given time sounds like a really bad idea.
It seems to me there's already enough interest for the closed beta to work.
A lot of projects being open sourced are using open source as a marketing ploy. I'm somewhat glad that Jai is being developed this way - it's as opinionated as it can be, and with the promise to open source it after completion, I feel that is sufficient.
Yep. A closed set of core language designers who have exclusive right to propose new paths for the language to take while developing fully Free and in the open is how Zig is developing.
That kind of means jack squat though. Jai is an unfinished *programming language*, Sqlite is an extremely mature *database*.
What chii is suggesting is open sourcing Jai now may cause nothing but distractions for the creator with 0 upside. People will write articles about its current state, ask why it's not like their favorite language or doesn't have such-and-such library. They will even suggest the creator is trying to "monopolize" some domain space because that's what programmers do to small open source projects.
That's a completely different situation from Sqlite and Linux, two massively-funded projects so mature and battle-tested that low-effort suggestions for the projects are not taken seriously. If I write an article asking Sqlite to be completely event-source focused in 5 years, I would be rightfully dunked on. Yet look at all the articles asking Zig to be "Rust but better."
I think you can look at any budding language over the past 20 years and see that people are not kind to a single maintainer with an open inbox.
We can muse about it all day; the choice is not ours to make. I simply presented the reality that other successful open source projects exist that were also in an 'early development state'.
There are positives and negatives to it; I'm not naive about the way the world works. People have free speech and the right to criticise the language, with or without access to the compiler and toolchain itself; you will never stop the tide of crazy.
I personally believe that you can do open source with strong stewardship even in the face of lunacy; the SQLite contributions policy is a very good example of handling this.
Closed or open, Blow will do what he wants. Waiting for a time when Jai is in a "good enough" state will not change any of the insanity that you've mentioned above.
I don't have a stake in this particular language or its author, I was just discussing the pros and cons of the approach.
> Waiting for a time when Jai is in a "good enough" state will not change any of the insanity that you've mentioned above.
I outlined some reasons why I think it would, and I think there's good precedent for that.
> the choice is not ours to make
I never said it was.
> People have free speech
I don't think I argued people don't have free speech? This is an easily defensible red herring to throw out, but it's irrelevant. People can say whatever they want on any forum, regardless of the project's openness. I am merely suggesting people are less inclined to shit on a battle-tested language than a young, moldable one.
Interesting, they've softened their stance. Today, it reads
> In order to keep SQLite in the public domain and ensure that the code does not become contaminated with proprietary or licensed content, the project does not accept patches from people who have not submitted an affidavit dedicating their contribution into the public domain.
But it used to read
> In order to keep SQLite in the public domain and ensure that the code does not become contaminated with proprietary or licensed content, the project does not accept patches from unknown persons.
Seems to be hardened, not softened: a person who has submitted an affidavit dedicating code to the public domain is at least minimally known, but a person may be known without submitting an affidavit, so the new form is strictly a stronger restriction than the old one.
You say this now, but between 2013 and around 2023 the prevailing definition of open source was that if you don't engage with the community and don't accept PRs, it is not open source. And people would start bad-mouthing the project around the internet.
Linux started before 2013? So did SQLite? And both are not even comparable, as they were already the dominant forces and not newly started projects.
And in case you somehow think I am against you: I am merely pointing out what happened between 2013 and 2023. I believe you were also one of the few on HN who fought against it.
Open source software with a closed development model has existed for a very long time, so that should have been accounted for no matter whether it was considered open source or not. (And I think it was around the 2000s, not the 2010s, when such a misconception was more widespread.)
I don't think the issue is just contributions. It's the visibility.
When you're a somewhat famous programmer releasing a long anticipated project, there's going to be a lot of eyes on that project. That's just going to come with hassle.
Well, it is the public internet, people are free to discuss whatever they come across. Just like you're free to ignore all of them, and release your software Bellard-style (just dump the release at your website, see https://bellard.org/) without any bug tracker or place for people to send patches to.
Having a lot of eyes on it is only a problem if you either have a self-esteem problem, so the inevitable criticism will blow you up, or you've got an ego problem, so the inevitable criticism will hurt your poor fragile ego. I think we can be sure which of these will be a problem for Jonathan "Why didn't people pay $$$ for a remaster of my old game which no longer stands out as interesting?" Blow.
yep and JBlow is a massive gatekeeper who discourages people from learning programming if he doesn't believe they can program the way he thinks a programmer should. He is absolutely running from any criticism that will hurt his enormous yet incredibly fragile ego.
The hate he is receiving is bizarre. It takes guts to be opinionated - you are effectively spilling your mind (and heart) to people. And yet some people will assume the worst about you even if it's an exact inversion of the truth.
It's not a "misconception". Open source implying open contributions is a very common stance, if not even the mainstream stance. Source availability is for better or for worse just one aspect of open source.
It is a misconception. Open source doesn’t mean the maintainer needs to interact with you. It just means you can access the code and do your own fork with whatever features you like.
Open Source definition ( https://opensource.org/osd ) says nothing about community involvement or accepting contributions. It may be common, but it is not necessary, required or even hinted at in the license.
For many it is very much a philosophy, a principle, and politics. The OSI is not the sole arbiter of what open source is, and while their definition is somewhat commonly referred to, it is not the be all end all.
> Any software is source-available in the broad sense as long as its source code is distributed along with it, even if the user has no legal rights to use, share, modify or even compile it.
You have the legal right to use, share, modify, and compile SQLite's source. If it were Source Available, you'd have the right to look at it, but do none of those things.
IMO the main thing they're risking by open sourcing it is adoption. Keeping it closed source is a pretty clear sign to the rest of the world that the language is not ready for widespread adoption. As soon as you open source it, even if you mark it as alpha, you'll end up with people using the language, and breaking changes will at that point break people's code.
Keeping things closed source is one way of indicating that. Another is to use a license that contains "THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED [...]" and then let people make their own choices. Just because something is open source doesn't mean it's ready for widespread adoption.
You're describing pretty much every popular open source license here, including the Linux kernel's (GPLv2). This doesn't set the expectation that things can and will break at any time. That's also not the approach maintainers take with most serious projects.
There is a lot of experimentation going on as well. A few months ago, two new casting syntaxes were added for users to evaluate. The plan is to keep only one and remove the others before release.
An argument can easily be made that Jai could have been released, even as closed source, some time ago. Many fans and curious onlookers just want to be able to get their hands on it.
Jon is not going to stop public reaction, nor will Jai be perfect, regardless of when he releases. At least releasing sooner allows it to keep momentum - not just momentum generated by him, but by third parties, such as books and videos on it. Maybe that's where Jon is making a mistake: not allowing others to help generate momentum.
That’s what I meant by forked. If Jonathan wants to keep his branch closed source, that’s fine, as long as he cuts a release, gives it a GNU license and calls it OpenJai or something. He doesn’t have to deal with the community, somebody will do that for him.
Apparently the 90s approach not only still works pretty well when the language comes with a piece of green-coloured hardware; all the ongoing returns to 90s licensing models also show that the free-beer approach isn't working when the goal is to build a sustainable business out of the technology.
Thanks. I haven’t played with D since it also had a closed source implementation (10+ years ago) and never kept up with its newer development. I should check it out again.
I don't get what's up with the runtime hysteria. All languages have a runtime, except maybe assembler. And the Linux kernel itself is infamous for not being plain C by a large margin. And in general, remove something important from any program and it will stop working.
If you do embedded work, you often want to be in total control of all memory allocations. So it is good to know that the compiler will not produce some invisible heap allocations, and that there is a useful subset of the standard library that does not use them either.
There is this streamer who does a lot of interesting language exploration on his own. I'm not saying you will find all the answers to your questions, but I think you will get a good sense of what you can and cannot do in Jai: https://www.youtube.com/results?search_query=Tsoding+jai
Tsoding is great. Don’t be put off by the memelord persona, he’s a genuinely smart guy always exploring some interesting language or library, or reimplementing something from scratch to truly understand it.
One can be put off by whatever one is put off by. I've gotten to the point where I realized that I don't need to listen to everyone's opinion. Everyone's got some. If one opinion is important, it will likely be shared by more than one person. From that it follows that there's no need to subject oneself to specific people one is put off by. Or put another way: if there's an actionable critique, and two people are stating it, and one is a dick and the other isn't, I'll pay attention to the one who isn't a dick. Life's too short to waste it with abrasive people, regardless of whether that is "what is in their heart" or a constructed persona. The worst effect of the "asshole genius" trope is that it makes a lot of assholes think they are geniuses.
Personally, I’d rather be the kind of person who could have evaluated Semmelweis’s claims dispassionately rather than one who reflexively wrote him off because he was strident in his opinions. Doctors of the second type tragically shortened the lives of those under their care!
Being abrasive is different from being a "memelord." The former is excusable and socially valuable and politically healthy, even essential. The latter is immature, antisocial, and socially and politically corrosive.
If it's a persona, then he's at best a performer and entertainer pandering to an audience that enjoys or relates to immature, insufferable people. If it isn't a persona, then he's just an immature, insufferable person.
No, thank you. Either way, the result is psychologically, socially, and politically corrosive and typically attracts a horrendous, overall obnoxious audience.
Is he actually doing that or is he doing what Casey Muratori's doing with Handmade Hero and taking almost a decade to implement a debug room for a generic top-down Zelda clone?
I have my doubts about Jai; the fact that Blow & co seem to have major misunderstandings with regard to RAII doesn't lend much confidence.
Also, a 19,000-line C++ program (this is tiny) does not take 45 minutes unless something is seriously broken; it should be a few seconds at most for a full rebuild, even with a decent amount of template usage.
This makes me suspect this author doesn't have much C++ experience, as this should have been obvious to them.
I do like the build script being in the same language, CMake can just die.
The metaprogramming looks more confusing than C++, why is "sin"/"cos" a string?
Based on this article I'm not sure what Jai's strength is, I would have assumed metaprogramming and SIMD prior, but these are hardly discussed, and the bit on metaprogramming didn't make much sense to me.
> Also a 19,000 line C++ program(this is tiny) does not take 45 minutes unless something is seriously broken
Agreed, 45 minutes is insane. In my experience, and this does depend on a lot of variables, 1 million lines of C++ ends up taking about 20 minutes. If we assume this scales linearly (I don't think it does, but let's imagine), 19k lines should take about 20 seconds. Maybe a little more with overhead, or a little less because of less burden on the linker.
There's a lot of assumptions in that back-of-the-envelope math, but if they're in the right ballpark it does mean that Jai has an order of magnitude faster builds.
I'm sure the big win is having a legit module system instead of plaintext header #include
Yeah, it's weird, but the author of this post claiming that defer can replace RAII kinda suggests that. RAII isn't just about releasing the resource you acquired in the current scope in the same scope. You can pass the resource across multiple boundaries with move semantics, and only at the end, when it's no longer needed, will the resource be released.
The author of the post claims that defer eliminates the need for RAII.
Well, goto also eliminates the "need" but language features are about making life easier, and life is much easier with RAII compared to having only defer.
It makes things easier. Usually the move constructor (or move assignment operator) will cause the moved-from object to stop being responsible for releasing a resource, moving the responsibility to the moved-to object. Simplest example: move-construct unique_ptr X from unique_ptr Y. When X is destroyed it will free the memory; when Y is destroyed it will do nothing.
So you can allocate resource in one function, then move the object across function boundaries, module boundaries, into another object etc. and in the end the resource will be released exactly once when the final object is destroyed. No need to remember in each of these places along the path to release the resource explicitly if there's an error (through defer or otherwise).
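Roughly the same idea sketched in Rust (hypothetical names, just to show ownership moving across boundaries and the release happening exactly once):

    // Hypothetical names; minimal sketch of ownership moving across boundaries.
    struct Buffer(Vec<u8>);

    fn producer() -> Box<Buffer> {
        Box::new(Buffer(vec![0; 1024])) // allocate in one function...
    }

    fn consumer(buf: Box<Buffer>) {
        println!("len = {}", buf.0.len()); // ...use it somewhere else entirely
    } // `buf` goes out of scope here: freed exactly once, no defer at any hand-off

    fn main() {
        let b = producer();
        consumer(b); // ownership moves into `consumer`; `b` is unusable afterwards
    }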
I agree that it makes some things easier (at the expense of managing constructors/destructors); I'm disputing the blanket assertion that it's superior to manual management, in the context of Jai (and Odin). You're also introducing a reference count, but that's beside the point.
In Jai/Odin, every scope has default global and temp allocators, there's nothing stopping you from transferring ownership and/or passing pointers down the callstack. Then you either free in the last scope where the pointer lives or you pick a natural lifetime near the top of the callstack, defer clear temp there, and forget about it.
You may also want to pass a resource through something like a channel, promise/future pair or similar. So it's not just down/up the callstack, sometimes it's "sideways". In those cases RAII is a lifesaver. Otherwise you have to explicitly remember to cover all the possibilities (see the sketch after this list):
- what if resource never enters the channel
- what if it enters the channel but never gets retrieved on the other side
- what if the channel gets closed
- what if other side tries to retrieve but cancels
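A minimal Rust sketch of that "sideways" case, assuming a plain std mpsc channel and a made-up Resource type; the destructor covers every branch above without any hand-written cleanup path:

    use std::sync::mpsc;

    struct Resource(String);

    impl Drop for Resource {
        fn drop(&mut self) {
            // Runs exactly once, whichever of the cases above actually occurs.
            println!("released {}", self.0);
        }
    }

    fn main() {
        let (tx, rx) = mpsc::channel();
        tx.send(Resource("queued but never retrieved".into())).unwrap();
        drop(tx);
        drop(rx); // both ends gone: anything still sitting in the queue is dropped
        // No hand-written cleanup for "never sent", "never received",
        // "channel closed", or "receive cancelled" - ownership plus Drop covers them all.
    }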
Honestly, I concur. Out of interest in what sort of methods they came up with to manage memory, I checked out the language's wiki, and I'm not sure going back to 1970s C (with the defer statement on top) is an improvement. You have to write defer everywhere, and if your object outlives the scope of the function, even that is useless.
I'm sure having to remember to free resources manually caused so much grief that they decided to come up with RAII, so that an object going out of scope (either on the stack, or when its owning object gets destroyed) would clean up its resources.
Unlike a lot of low-level people, I don't hate garbage collection either: a lot of implementations reduce allocation to pointer bumping, which is equivalent behavior to these super-fast temporary arenas, with the caveat that once you run out of memory, the GC cleans up and defragments your heap.
If for some reason, you manage to throw away the memory you allocated before the GC comes along, all that memory becomes junk at zero cost, with the mark-and-sweep algorithm not even having to look at it.
I'm not claiming either GC or RAII are faultless, but throwing up your hands in the air and going back to 1970s methods is not a good solution imo.
That being said, I happen to find a lot that's good about Jai as well, which I'm not going to go into detail about.
This take is equally bizarre. Most languages have addition semantics. Most languages do not have RAII; that's, by and large, a C++ thing. Jai does NOT have RAII. So, again, why would anybody care what his opinion on RAII is?
> The net effect of this is that the software you’re running on your computer is effectively wiping out the last 10-20 years of hardware evolution; in some extreme cases, more like 30 years.
As an industry we need to worry about this more. I get that in business, if you can be less efficient in order to put out more features faster, your dps[0] is higher. But as both a programmer and an end user, I care deeply about efficiency. Bad enough when just one application is sucking up resources unnecessarily, but now it's nearly every application, up to and including the OS itself if you are lucky enough to be a Microsoft customer.
The hardware I have sitting on my desk is vastly more powerful than what I was rocking 10-20 years ago, but the user experience seems about the same. No new features have really revolutionized how I use the computer, so from my perspective all we have done is make everything slower in lockstep with hardware advances.
I understand the attitude but I think it misses a few aspects.
We have far more isolation between software, we have cryptography that would have been impractical to compute decades ago, and it’s used at rest and on the wire. All that comes at significant cost. It might only be a few percent of performance on modern systems, and therefore easy to justify, but it would have been a higher percentage a few decades ago.
Another thing that’s not considered is the scale of data. Yes software is slower, but it’s processing more data. A video file now might be 4K, where decades ago it may have been 240p. It’s probably also far more compressed today to ensure that the file size growth wasn’t entirely linear. The simple act of replaying a video takes far more processing than it did before.
Lastly, the focus on dynamic languages is often either misinformed or purposefully misleading. LLM training is often done in Python and it’s some of the most performance sensitive work being done at the moment. Of course that’s because the actual training isn’t executing in a Python VM. The same is true for so much of “dynamic languages” though, the heavy lifting is done elsewhere and the actual performance benefits of rewriting the Python bit to C++ or something would often be minimal. This does vary of course, but it’s not something I see acknowledged in these overly simplified arguments.
Requirements have changed, software has to do far more, and we’re kidding ourselves if we think it’s comparable. That’s not to say we shouldn’t reduce wastage, we should! But to dismiss modern software engineering because of dynamic languages etc is naive.
> The hardware I have sitting on my desk is vastly more powerful that what I was rocking 10-20 years ago, but the user experience seems about the same.
Not even.
It used to be that when you clicked a button, things happened immediately, instead of a few seconds later as everything freezes up. Text could be entered into fields without inputs getting dropped or playing catch-up. A mysterious unkillable service wouldn't randomly decide to peg your core several times a day. This was all the case even as late as Windows 7.
At the same time, it was also the case that you typed 9 characters into an 8-character field and you p0wn3d the application.
>Text could be entered into fields without inputs getting dropped or playing catch-up
This has been a complaint since the DOS days, in my experience. I'm pretty sure it's been industry standard from its inception that most large software providers make the software just fast enough that users don't give up, and that's it.
Take something like Notepad opening files. Large files take forever. Yet I can pop open Notepad++, from some random small team, and it opens the same file quickly.
Jai's perpetual closed beta is such a weird thing... On the one hand, I sort of get that the developers don't want to waste their time and attention on too many random people trying to butt in with their ideas and suggestions. On the other hand, they are thereby wasting the time and attention of all the people who watched the development videos and read the blog posts, and now can do basically nothing with that knowledge other than slowly forget it. (Except for the few who take the ideas and incorporate them into their own languages).
The reality of a project like this is that to get it right (which is by the creator's standards, no one else's) takes time. Add on top of that Blow and Thekla are building games with this to dogfood it which takes time, too.
Sadly, there exists a breed of developer that is manipulative, obnoxious, and loves to waste time and denigrate someone building something. Relatively few people are genuinely interested (like the OP) in helping to develop the thing, test builds, etc. Most just want to make contributions for their Github profile (assuming OSS) or exorcise their internal demons by projecting their insecurities onto someone else.
From all of the JB content I've seen/read, this is a rough approximation of his position. It's far less stressful to just work on the idea in relative isolation until it's ready (by whatever standard) than to deal with the random chaos of letting anyone and everyone in.
This [1] is worth listening to (suspending cynicism) to get at the "why" (my editorialization, not JB).
Personally, I wish more people working on stuff were like this. It makes me far more likely to adopt it when it is ready because I can trust that the appropriate time was put in to building it.
I get that. But if you want to work in relative isolation, would it be too much to ask to not advertise the project publicly and wax poetic about how productive this (unavailable) language makes you? Having had a considerable interest in Jai in the past, I do feel a little bit cheated :) even though I realize no binding promises have been made.
As well as "a few early presentations" (multiple hour+ conference talks) Jon keeps appearing on podcasts, and of course he's there to talk about this unavailable programming language although sometimes he does also talk about The Witness or Braid.
It's a common thing in programming language design and circles where some people like to form little cults of personality around their project. Curtis Yarvin did that with his Urbit project. V-Lang is another good example. I consider Elm an example as well.
They get a few "true believer" followers, give them special privileges like beta access (this case), special arcane knowledge (see Urbit), or even special standing within the community (also Urbit, although many other languages where the true believers are given authority over community spaces like discord/mailing list/irc etc.).
I don't associate in these spaces because I find the people especially toxic. Usually they are high drama because the focus isn't around technical matters but instead around the cult leader and the drama that surrounds him, defending/attacking his decisions, rationalizing his whims, and toeing the line.
Like this thread, where a large proportion is discussion about Blow as a personality rather than the technical merit of his work. He wants it that way; not to say that his work doesn't have technical merit, but he'd rather we be talking about him.
One thing I want to add to the other (so far) good responses: they also seem to be building Jai as a means to an end, which is: they are actively developing a game engine with it (to be used for more than one project) and a game, which is already in advanced stages.
If you consider a small team working on this, developing the language seriously, earnestly, but as a means to an end on the side, I can totally see why they think it may be the best approach to develop the language fully internally. It's an iterative develop-as-you-go approach, you're writing a highly specific opinionated tool for your niche.
So maybe it's best to simply wait until engine + game are done, and they can (depending on the game's success) really devote focus and time on polishing language and compiler up, stabilizing a version 1.0 if you will, and "package" it in an appropriate manner.
Plus: they don't seem to be in the "promote a language for the language's sake" game; it doesn't seem to be about finding the perfect release date, with shiny mascot + discord server + full-fledged stdlib + full documentation from day one, to then "hire" redditors and youtubers to spread the word and have an armada of newbie programmers use it to write games... they seem to see it much more as creating a professional tool aimed at professional programmers, particularly in the domain of high performance compiled languages, particularly for games. People they are targeting will evaluate the language thoroughly when it's out, whether that's in 2019, 2025 or 2028. And whether they are top 10 in some popularity contest or not, I just don't think they're playing by such metrics. The right people will check it out once it's out, I'm sure. And whether such a language will be used or not will probably, hopefully even, not depend on finding the most hyped point in time to release it.
There's a school of thought that correctly states that, in that case, it is very easy to cause expensive drop behavior to be run for each element in the vector where a faster batch approach could instead be used - batching is doable, if not encouraged, with defer, so it should be prioritized to push people toward that.
I do not subscribe to that idea, because with RAII you can still have batched drops; the only difference between the two defaults is that with defer the failure mode is leaks, while with RAII the failure mode is more code than you otherwise would have.
> rather than asking the programmer to do an extraordinary amount of extra work to conform to syntactically enforced safety rules. Put the complexity in the compiler, dudes.
And how would that compiler work? Magic? Maybe clairvoyance?
> I’d be much more excited about that promise [memory safety in Rust] if the compiler provided that safety, rather than asking the programmer to do an extraordinary amount of extra work to conform to syntactically enforced safety rules. Put the complexity in the compiler, dudes.
That exists; it's called garbage collection.
If you don't want the performance characteristics of garbage collection, something has to give. Either you sacrifice memory safety or you accept a more restrictive paradigm than GC'd languages give you. For some reason, programming language enthusiasts think that if you think really hard, every issue has some solution out there without any drawbacks at all just waiting to be found. But in fact, creating a system that has zero runtime overhead and unlimited aliasing with a mutable heap is as impossible as finding two even numbers whose sum is odd.
The faster computers get, the more the GC problem is way overblown apart from super-low-latency niches. Even AAA games these days happily run on GC languages.
There is a prominent contributor to HN whose profile says they dream of a world where all languages offer automatic memory management and I think about that a lot, as a low-level backend engineer. Unless I find myself writing an HFT bot or a kernel, I have zero need to care about memory allocation, cycles, and who owns what.
GC doesn't exactly solve your memory problem; it typically means that your memory problem gets deferred quite far, until you can't ignore it. Of course it is also quite likely that your program will never grow to that point, which is why GC works in general, but also why there exists a desire to avoid it when that makes sense.
In games you have 16ms to draw billion+ triangles (etc.).
In web, you have 100ms to round-trip a request under arbitrarily high load (etc.)
Cases where you cannot "stop the world" at random and just "clean up garbage" are quite common in programming. And when they happen in GC'd languages, you're much worse off.
Azul C4 is not a pauseless GC. In the documentation it says "C4 uses a 4-stage concurrent execution mechanism that eliminates almost all stop-the-world pauses."
> C4 differentiates itself from other generational garbage collectors by supporting simultaneous-generational concurrency: the different generations are collected using concurrent (non stop-the-world) mechanisms
(As with any low-pause collector, the rest of your code is uniformly slower by some percentage because it has to make sure not to step on the toes of the concurrently-running collector.)
The benchmarks game shows memory use with default GC settings (as a way to uncover space-time tradeoffs), mostly for tiny tiny programs that hardly use memory.
Less difference — mandelbrot, k-nucleotide, reverse-complement, regex-redux — when the task requires memory to be used.
> Less difference — mandelbrot, k-nucleotide, reverse-complement, regex-redux — when the task requires memory to be used.
yes, I referred to benchmarks with large memory consumption, where Java still uses 2 to 10 times (as in the binary trees task) more memory, which is a large overhead.
That’s fair, no resource is unlimited. My point is that memory is usually the least of one’s problems, even on average machines. Productivity and CPU usage tend to be the bottleneck as a developer and a user. GC is mostly a performance problem rather than a memory one, and a well-designed language can minimize the impact of it. (I am working on a message-passing language, and only allowing GC after a reply greatly simplifies the design and performance characteristics)
>My point is that memory is usually the least of one’s problem, even on average machines.
The average machine a person directly interacts with is a phone or TV at this point, both of which have major BoM restrictions and high pixel density displays. Memory is the primary determination of performance in such environments.
On desktops and servers, CPU performance is bottlenecked on memory - garbage collection isn't necessarily a problem there, but the nature of separate allocations and pointer chasing is.
On battery, garbage collection costs significant power and so it gets deferred (at least for full collections) until it's unavoidable. In practice this means that a large amount of heap space is "dead", which costs memory.
Your language sounds interesting - I've always thought that it would be cool to have a language where generational GC was exposed to the programmer. If you have a server, you can have one new generation arena per request with a write barrier for incoming references from the old generation to the new. Then you could perform young GC after every request, only paying for traversal+move of objects that survived.
eh, there are GC languages famous for high uptimes and deployed in places where it "basically runs forever with no intervention", so in practice with the right GC and application scope, "deferring the concern till the heat death of the universe" (or until a CVE forces a soft update) is possible.
That's exactly why I said "it is also quite likely that your program will never grow to that point". Of course you need non-trivial knowledge to determine whether your application and GC satisfy that criteria.
>Even AAA games these days happily run on GC languages.
Which games are these? Are you referring to games written in Unity where the game logic is scripted in C#? Or are you referring to Minecraft Java Edition?
I seriously doubt you would get close to the same performance in a modern AAA title running in a Java/C# based engine.
You're right that there is a difference between "engine written largely in C++ and some parts are GC'd" vs "game written in Java/C#", but it's certainly not unheard of to use a GC in games, pervasively in simpler ones (Heck, Balatro is written in Lua!) and sparingly in even more advanced titles.
Sure, but you write the games in Lua. That Love2d is implemented in C++ (GitHub says like 80% C++ and 10% C) doesn't mean that you're writing the game in it. In my understanding, Love2d uses reference counting (which is still GC) for its own stuff, and integrates that into Lua's tracing GC.
Well, 1) the temporary allocator strategy; and 2) `defer` kinda go against the spirit of this observation.
With (1) you get the benefits of GC with, in many cases, a single line of code. This handles a lot of use cases. Of those it doesn't, `defer` is that "other single line".
I think the issue being raised is the "convenience payoff for the syntax/semantics burden". The payoff for temp-alloc and defer is enormous: you make the memory management explicit so you can easily see-and-reason-about the code; and it's a trivial amount of code.
Something feels deeply wrong with RAII-style languages: you carry the burden of reasoning about implicit behaviour, and all the while this behaviour saves you nothing. It's the worst of both worlds: hiddenness and burdensomeness.
Neither of those gives memory safety, which is what the parent comment is about. If you release the temporary allocator while a pointer to some data is live, you get use after free. If you defer freeing a resource, and a pointer to the resource lives on after the scope exit, you get use after free.
The dialectic begins with OP, and has pcw's reply and then mine. It does not begin with pcw's comment. The OP complains about Rust not because they imagine Jai is memory safe, but because they feel the rewards of its approach significantly outweigh the costs of Rust.
pcw's comment was about tradeoffs programmers are willing to make -- and paints the picture more black-and-white than the reality; and more black and white than OP.
I don't understand this take at all. The borrow checker is automatic and works across all variables. Defer et al requires you remember to use it, and use it correctly. It takes more effort to use defer correctly whereas Rust's borrow checker works for you without needing to do much extra at all! What am I missing?
What you're missing is that Rust's borrowing rules are not the definition of memory safety. They are just one particular approach that works, but with tradeoffs.
Namely, in Rust it is undefined behavior for multiple mutable references to the same data to exist, ever. And it is also not enough for your program to not create multiple mut - the compiler also has to be able to prove that it can't.
That rule prevents memory corruption, but it outlaws many programs that break the rule yet actually are otherwise memory safe, and it also outlaws programs that follow the rule but wherein the compiler isn't smart enough to prove that the rule is being followed. That annoyance is the main thing people are talking about when they say they are "fighting the borrow checker" (when comparing Rust with languages like Odin/Zig/Jai).
That is true of `&mut T`, but `&mut T` is not the only way to do mutation in Rust. The set of possible safe patterns gets much wider when you include `&Cell<T>`. For example see this language that uses its equivalent of `&Cell<T>` as the primary mutable reference type, and uses its equivalent of `&mut T` more sparingly: https://antelang.org/blog/safe_shared_mutability/
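For instance, a tiny made-up example of shared mutation through `&Cell<T>` (not taken from the linked post):

    use std::cell::Cell;

    fn bump(counter: &Cell<u32>) {
        counter.set(counter.get() + 1);
    }

    fn main() {
        let c = Cell::new(0);
        let a = &c; // two shared references to the same mutable data...
        let b = &c;
        bump(a);
        bump(b); // ...both can mutate it, with no `&mut` and no aliasing fight
        assert_eq!(c.get(), 2);
    }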
> The borrow checker is automatic and works across all variables.
Not that I'm such a Rust hater, but this is also a simplification of the reality. The term "fighting the borrow checker" is these days a pretty normal saying, and it implies that the borrow checker may be automatic, but 90% of its work is telling you: no, try again. That is hardly "without needing to do much extra at all".
What's hilarious about "fighting the borrow checker" is that it's about the lexical lifetime borrow checking, which went away many years ago - fixing that is what "non-lexical lifetimes" was about, which, if you picked up Rust in the last 4-5 years, you won't even know was a thing. In that era you actually did need to "fight" to get obviously correct code to compile, because the checker only looked at the lexical structure.
Because this phrase existed, it became the thing people latch onto as a complaint, often even when there is no borrowck problem with what they were writing.
Yes of course when you make lifetime mistakes the borrowck means you have to fix them. It's true that in a sense in a GC language you don't have to fix them (although the consequences can be pretty nasty if you don't) because the GC will handle it - and that in a language like Jai you can just endure the weird crashes (but remember this article, the weird crashes aren't "Undefined Behaviour" apparently, even though that's exactly what they are)
As a Rust programmer I'm comfortable with the statement that it's "without needing to do much extra at all".
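A small made-up example of what that distinction is about; under the old lexical checker the borrow of `first` lasted to the end of the scope, so the `push` was rejected, while with non-lexical lifetimes this compiles as written:

    fn main() {
        let mut scores = vec![1, 2, 3];
        let first = &scores[0];      // shared borrow of `scores`...
        println!("first = {first}"); // ...last use is here
        scores.push(4);              // rejected by the old lexical checker,
                                     // accepted since non-lexical lifetimes
        println!("{scores:?}");
    }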
I appreciate what you're saying, though isn't undefined behavior having to do with the semantics of execution as specified by the language? Most languages outright decline to specify multiple threads of execution, and instead provide it as a library. I think C started that trend. I'm not sure if Jai even has a spec, but the behavior you're describing could very well be "unspecified" not "undefined" and that's a distinction some folks care about.
This being said, yes Rust is useful to verify those scenarios because it _does_ specify them, and despite his brash takes on Rust, Jon admits its utility in this regard from time to time.
> the behavior you're describing could very well be "unspecified" not "undefined" and that's a distinction some folks care about.
Nah, it's going to be Undefined. What's going on here is that there's an optimising compiler, and the way compiler optimisation works is you Define some but not all behaviour in your language and the optimiser is allowed to make any transformations which keep the behaviour you Defined.
Jai uses LLVM so in many cases the UB is exactly the same as you'd see in Clang since that's also using LLVM. For example Jai can explicitly choose not to initialize a variable (unlike C++ 23 and earlier this isn't the default for the primitive types, but it is still possible) - in LLVM I believe this means the uninitialized variable is poison. Exactly the same awful surprises result.
> because it is the kind of optimizing compiler you say it is
What other kind of optimisations are you imagining? I'm not talking about a particular "kind" of optimisation but the entire category. Let's look at two real-world optimisations from opposite ends of the scale to see:
1. Peephole removal of null sequences. This is a very easy optimisation, if we're going to do X and then do opposite-of-X we can do neither and have the same outcome which is typically smaller and faster. For example on a simple stack machine pushing register R10 and then popping R10 achieves nothing, so we can remove both of these steps from the resulting program.
BUT if we've defined everything this can't work because it means we're no longer touching the stack here, so a language will often not define such things at all (e.g. not even mentioning the existence of a "stack") and thus permit this optimisation.
2. Idiom recognition of population count. The compiler can analyse some function you've written and conclude that it's actually trying to count all the set bits in a value, but many modern CPUs have a dedicated instruction for that, so, the compiler can simply emit that CPU instruction where you call your function.
BUT You wrote this whole complicated function, if we've defined everything then all the fine details of your function must be reproduced, there must be a function call, maybe you make some temporary accumulator, you test and increment in a loop -- all defined, so such an optimisation would be impossible.
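As a rough sketch of case 2 in Rust: a hand-rolled loop that optimizing backends commonly recognize and lower to a single population-count instruction, which is only legal because the loop's intermediate steps aren't defined, observable behaviour:

    // Hand-rolled population count; with optimizations on, LLVM commonly
    // recognizes this pattern and can emit a single POPCNT on x86-64.
    fn popcount(mut x: u64) -> u32 {
        let mut n = 0;
        while x != 0 {
            x &= x - 1; // clear the lowest set bit
            n += 1;
        }
        n
    }

    fn main() {
        assert_eq!(popcount(0b1011_0100), 4);
        assert_eq!(popcount(u64::MAX), 64);
    }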
>In that era you actually did need to "fight" to get obviously correct code to compile because the checking is only looking at the lexical structure.
NLL's final implementation (Polonius) hasn't landed yet, and many of the original cases that NLL was meant to allow still don't compile. This doesn't come up very often in practice, but it sure sounds like a hole in your argument.
What does come up in practice is partial borrowing errors. It's one of the most common complaints among Rust programmers, and it definitely qualifies as having to fight/refactor to get obviously correct code to compile.
> What does come up in practice is partial borrowing errors.
For some people. For example, I personally have never had a partial borrowing error.
> it definitely qualifies as having to fight/refactor to get obviously correct code to compile.
That's not a given. That is, while it's code that could work, it's not obviously clear that it's correct. Rust cares a lot about the contract of function signatures, and partial borrows violate the signature; that's why they're not allowed. Some people want to relax that restriction. I personally think it's a bad idea.
> Rust cares a lot about the contract of function signatures, and partial borrows violate the signature
People want to be able to specify partial borrowing in the signatures. There have been several proposals for this. But so far nothing has made it into the language.
Just to give an example of where I've run into countless partial borrowing problems: writing a Vulkan program. The usual pattern in C++ etc. is to just have a giant "GraphicsState" struct that contains all the data you need. Then you just pass a reference to that to any function that needs any state. (Of course, this is not safe, because you could have accidental mutable aliasing.)
But in Rust, that just doesn't work. You get countless errors like "Can't call self.resize_framebuffer() because you've already borrowed self.grass_texture" (even though resize_framebuffer would never touch the grass texture), "Can't call self.upload_geometry() because you've already borrowed self.window.width", and so on.
So instead you end up with 30 functions that each take 20 parameters and return 5 values, and most of the code is shuffling around function arguments
It would be so much nicer if you could instead annotate that resize_framebuffer only borrows self.framebuffer, and no other part of self.
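A stripped-down, hypothetical repro of that kind of error (field and method names made up to match the description above):

    struct GraphicsState {
        framebuffer: Vec<u8>,
        grass_texture: Vec<u8>,
    }

    impl GraphicsState {
        #[allow(dead_code)]
        fn resize_framebuffer(&mut self) {
            self.framebuffer.resize(1920 * 1080 * 4, 0); // never touches grass_texture
        }
    }

    fn main() {
        let mut state = GraphicsState { framebuffer: vec![], grass_texture: vec![] };
        let grass = &state.grass_texture;
        // state.resize_framebuffer();
        // ^ error[E0502]: cannot borrow `state` as mutable because it is also
        //   borrowed as immutable - the signature only says `&mut self`, so the
        //   compiler must assume the whole struct is touched.
        println!("{}", grass.len());
    }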
> People want to be able to specify partial borrowing in the signatures.
That's correct. That's why I said "Some people want to relax that restriction. I personally think it's a bad idea."
> The usual pattern in C++ etc is to just have a giant "GrahpicsState" struct that contains all the data you need. Then you just pass a reference to that to any function that needs any state.
Yes, I think that this style of programming is not good, because it creates giant balls of aliasing state. I understand that if the library you use requires you to do this, you're sorta SOL, but in the programs I write, I've never been required to do this.
> So instead you end up with 30 functions that each take 20 parameters and return 5 values, and most of the code is shuffling around function arguments
Yes, this is the downstream effects of designing APIs this way. Breaking them up into smaller chunks of state makes it significantly more pleasant.
I am not sure that it's a good idea to change the language to make using poorly designed APIs easier. I also understand that reasonable people differ on this issue.
>Yes, this is the downstream effects of designing APIs this way. Breaking them up into smaller chunks of state makes it significantly more pleasant.
What they're describing is the downstream effect of not designing APIs that way. If you could have a single giant GraphicsState and define everything as a method on it, you would have to pass around barely any arguments at all: everything would be reachable from the &mut self reference. And either with some annotations or with just a tiny bit of non-local analysis, the compiler would still be able to ensure non-aliasing usage.
"functions that each take 20 parameters and return 5 values" is what you're forced to write in alternative to that, to avoid partial borrowing errors: for example, instead of a self.resize_framebuffer() method, a free function resize_framebuffer(&mut self.framebuffer, &mut self.size, &mut self.several_other_pieces_of_self, &mut self.borrowed_one_by_one).
I agree that the severity of this issue is highly dependent on what you're building, but sometimes you really do have a big ball of mutable state and there's not much you can do about it.
A lot has been written about this already, but again I think you're simplifying here by saying "once you get it". There's a bunch of options here for what's happening:
1. The borrow checker is indeed a free lunch
2. Your domain lends itself well to Rust, other domains don't
3. Your code is more complicated than it would be in other languages to please the borrow checker, but you are unaware because it's just the natural process of writing code in Rust.
There's probably more things that could be going on, but I think this is clear.
I certainly doubt it's #1, given the high volume of very intelligent people that have negative experiences with the borrow checker.
"But after an initial learning hump, I don't fight the borrow checker anymore" is quite common and widely understood.
Just like any programming paradigm, it takes time to get used to, and that time varies between people. And just like any programming paradigm, some people end up not liking it.
I'm not sure what you mean here, since in different replies to this same thread you've already encountered someone who is, by virtue of Rust's borrow checker design, forced to change their code in a way that is, to that person, a net negative.
Again, this person has no trouble understanding the BC; they have trouble with the outcome of satisfying the BC. Also, this person is writing Vulkan code, so intelligence is not the problem.
> is quite common and widely understood
This is an opinion expressed in a bubble, which does not in any way disprove that the reverse is also expressed in another bubble.
"common" does not mean "every single person feels that way" in the same sense that one person wanting to change their code in a way they don't like doesn't mean that every single person writing Rust feels the way that they do.
If your use case can be split into phases you can just allocate memory from an arena, copy out whatever needs to survive the phase at the end and free all the memory at once. That takes care of 90%+ of all allocations I ever need to do in my work.
For the rest you need more granular manual memory management, and defer is just a convenience in that case compared to C.
I can have graphs with pointers all over the place during the phase, I don't have to explain anything to a borrow checker, and it's safe as long as you are careful at the phase boundaries.
Note that I almost never have things that need to survive a phase boundary, so in practice the borrow checker is just a nuisance in my work.
There other use cases where this doesn't apply, so I'm not "anti borrow checker", but it's a tool, and I don't need it most of the time.
You can explain this sort of pattern to the borrow checker quite trivially: slap a single `'arena` lifetime on all the references that point to something in that arena. This pattern is used all over the place, including rustc itself.
(To be clear I agree that this is an easy pattern to write correctly without a borrow checker as well. It's just not a good example of something that's any harder to do in Rust, either.)
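Roughly what that looks like; this sketch assumes the third-party typed_arena crate, but any bump arena with the same shape works:

    use typed_arena::Arena;

    // One `'arena` lifetime ties every internal pointer to the arena itself.
    struct Node<'arena> {
        value: u32,
        next: Option<&'arena Node<'arena>>, // pointers "all over the place", all arena-tied
    }

    fn main() {
        let arena = Arena::new();
        let a = arena.alloc(Node { value: 1, next: None });
        let b = arena.alloc(Node { value: 2, next: Some(a) });
        println!("{} -> {}", b.value, b.next.unwrap().value);
    } // the whole graph is freed in one shot when `arena` goes out of scope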
I remember having multiple issues doing this in rust, but can't recall the details. Are you sure I would just be able to have whatever refs I want and use them without the borrow checker complaining about things that are actually perfectly safe? I don't remember that being the case.
Edit: reading wavemode comment above "Namely, in Rust it is undefined behavior for multiple mutable references to the same data to exist, ever. And it is also not enough for your program to not create multiple mut - the compiler also has to be able to prove that it can't." that I think was at least one of the problems I had.
The main issue with using arenas in Rust right now is that the standard library collections use the still-unstable allocator API, so you cannot use those with them. However, this is a systems language, so you can use whatever you want for your own data structures.
> reading wavemode comment above
This is true for `&mut T` but that isn't directly related to arenas. Furthermore, you can have multiple mutable aliased references, but you need to not use `&mut T` while doing so: you can take advantage of some form of internal mutability and use `&T`, for example. What is needed depends on the circumstances.
wavemode's comment only applies to `&mut T`. You do not have to use `&mut T` to form the reference graph in your arena, which indeed would be unlikely to work out.
Not sure about the implicit behavior. In C++, you can write a lot of code using vector and map that would require manual memory management in C. It's as if the heap wasn't there.
Feels like there is a beneficial property in there.
> Either you sacrifice memory safety or you accept a more restrictive paradigm than GC'd languages give you.
This is true but there is a middle ground. You use a reasonably fast (i.e. compiled) GC lang, and write your own allocator(s) inside of it for performance-critical stuff.
Ironically, this is usually the right pattern even in non-GC langs: you typically want to minimize unnecessary allocations during runtime, and leverage techniques like object pooling to do that.
IOW I don't think raw performance is a good argument for not using GC (e.g. gamedev or scientific computing).
Not being able to afford the GC runtime overhead is a good argument (e.g. embedded programs, HFT).
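A minimal sketch of the pooling idea mentioned above (written in Rust here just for concreteness; the same shape applies inside a GC'd language):

    // Reuse buffers instead of allocating fresh ones every frame/request.
    struct BufferPool {
        free: Vec<Vec<u8>>,
    }

    impl BufferPool {
        fn new() -> Self { Self { free: Vec::new() } }

        fn acquire(&mut self, len: usize) -> Vec<u8> {
            let mut buf = self.free.pop().unwrap_or_default(); // reuse if available
            buf.clear();
            buf.resize(len, 0);
            buf
        }

        fn release(&mut self, buf: Vec<u8>) {
            self.free.push(buf); // return capacity to the pool instead of freeing
        }
    }

    fn main() {
        let mut pool = BufferPool::new();
        let b = pool.acquire(1024); // first use allocates...
        pool.release(b);
        let _b2 = pool.acquire(512); // ...later uses reuse the same capacity
    }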
It's difficult to design a language which has good usability both with and without a GC. Can users create a reference which points to the interior of an object? Does the standard library allocate? Can the language implement useful features like move semantics and destructors, when GCed objects have an indefinite lifetime?
You'd almost end up with two languages in one. It would be interesting to see a language fully embrace that, with fast/slow language dialects which have very good interoperability. The complexity cost would be high, but if the alternative is learning two languages rather than one...
I'm not saying you design a language with an optional GC, I'm saying the user can implement their own allocators i.e. large object pools nested in the GC-managed memory system. And then they get to avoid most of the allocation and deallocation overhead during runtime.
Sorry, I wasn't very clear - I think that using an object pool in a GCed language is like writing code in a dialect of that language which has no allocator.
The biggest "crime" of Jai is that it (soft-)launched like an open source programming language and didn't actually become open source shortly. There are so many programming languages that did go through the "beta" period and still remain open sourced all the time. Open source doesn't imply open governance, and most such languages are still evolved almost solely with original authors' judgements. It is fine for Jai to remain closed of course, but there is no practical reason for Jai to remain closed to this day. The resulting confusion is large enough to dismiss Jai at this stage.
To me this raises the question of whether this is a growing trend, or whether it's simply that languages staying closed source tends to be a death sentence for them in the long term.
Yep. Same dilemma as Star Citizen. If both just threw their hands up and said, "Done!", today then everyone would agree that a great product had been released and everyone would be mostly pleased. Instead, development has dragged on so long as to cast doubts over the goodwill of the founders. Now, Jai is unusable because it's difficult to trust Blow if he's willing to lie about that and Star Citizen is unplayable because the game was clearly released under false pretenses.
I also found this comment a bit strange. I'm not aware of a situation where this occurs, though he might be conflating creating an anonymous function with calling it.
Probably misspoke. Returning or passing anonymous functions causes allocations for the closures; calling them then causes probably 4 or 5 levels of pointer chasing to get at the data that got (invisibly) closed over.
I don't think there is much pointer chasing at runtime. With lexically scoped closures it's only the compiler who walks the stack frames to find the referenced variable; the compiled function can point directly to the object in the stack frame. In my understanding, closed over variables have (almost) no runtime cost over "normal" local variables. Please correct me if I'm wrong.
I meant more like storing closures to be used later after any locals are out of the stack frame, but tbh that's an abstraction that also causes allocations in C++ and Rust. On the other hand, no idea how JS internals work but I know in python getting the length of an array takes five layers of pointer indirection so it very well could be pointer to closure object -> pointer to list of closed variables -> pointer to boxed variable -> pointer to number or some ridiculous thing like that.
In C++, lambda functions don't require dynamic memory allocations; only type erasure via std::function does (if the capture list is too large for the small-function optimization).
However, C++ lambdas don't keep the parent environment alive, so if you capture a local variable by reference and call the lambda outside the original function environment, you have a dangling reference and get a crash.
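A small sketch of both points, assuming nothing beyond standard C++ (names invented):

    #include <functional>
    #include <iostream>

    // Capture by value: the closure owns a copy; the lambda object itself
    // needs no heap allocation.
    auto make_adder(int n) {
        return [n](int x) { return x + n; };              // safe
    }

    // Capture by reference, wrapped in std::function (which may heap-allocate
    // if the capture outgrows the small-buffer optimization). The captured
    // local dies when the function returns, so calling the result is UB.
    std::function<int(int)> make_dangling_adder(int n) {
        int local = n;
        return [&local](int x) { return x + local; };     // dangles on return
    }

    int main() {
        std::cout << make_adder(5)(1) << "\n";            // fine: prints 6
        auto bad = make_dangling_adder(5);
        (void)bad;
        // bad(1) would read a dead stack slot -- undefined behavior.
    }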
JavaScript is almost always JIT’ed and Python is usually not, so I wouldn’t rely on your Python intuition when talking about JavaScript performance. Especially when you’re using it to suggest that JavaScript programmers don’t understand the performance characteristics of their code.
I didn't know much about Jai and started reading the article; it really has (according to the article) some exciting features, but this caught my eye:
"... Much like how object oriented programs carry around a this pointer all over the place when working with objects, in Jai, each thread carries around a context stack, which keeps track of some cross-functional stuff, like which is the default memory allocator to ..."
It reminds me of GoLang's context, and something like it should exist in any language dealing with multi-threading, as a way of carrying info about the parent thread/process (and tokens) for trace propagation, etc.
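For readers who haven't seen the pattern, here is a rough C++ analog of the idea (this is not Jai's actual mechanism, just an illustrative sketch with invented names): a thread-local context struct carrying the current allocator, which callers can override for a scope and callees consult implicitly.

    #include <cstddef>
    #include <memory_resource>
    #include <vector>

    // Invented for illustration: per-thread cross-cutting state, roughly in
    // the spirit of Jai's context or Go's context.Context.
    struct Context {
        std::pmr::memory_resource* allocator = std::pmr::get_default_resource();
        const char* trace_id = "";
    };

    thread_local Context g_ctx;            // each thread carries its own context

    // Scope guard: swap a new context in, restore the old one on exit.
    struct PushContext {
        explicit PushContext(Context next) : saved(g_ctx) { g_ctx = next; }
        ~PushContext() { g_ctx = saved; }
        Context saved;
    };

    // Library code takes no allocator parameter; it reads the context.
    std::pmr::vector<int> make_buffer(std::size_t n) {
        std::pmr::vector<int> v{g_ctx.allocator};
        v.resize(n);
        return v;
    }

    int main() {
        std::byte scratch[4096];
        std::pmr::monotonic_buffer_resource arena(scratch, sizeof scratch);

        PushContext scope(Context{&arena, "req-42"});
        auto buf = make_buffer(128);   // allocated from the arena via the context
        (void)buf;
    }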
I think the Zig community should worry more about when Jai is released, especially given that Zig isn't anywhere near 1.0 yet. Feature-wise it's a rather sparse language with a first-mover advantage over alternatives, but as a language it doesn't actually bring many new tools AND it is bad with IDEs.
Jai similarly is hard for IDEs, but has much more depth and power.
While Zig has momentum, it will need to solidify it to become mainstream, or Jai has a good chance of disrupting Zig’s popularity. Basically Zig is Jai but minus A LOT of features, while being more verbose and annoyingly strict about things.
Odin on the other hand has no compile-time execution and in general has different solutions compared to Zig & Jai, with its rich set of builtin types and IDE friendliness.
And finally there is C3, which is for people who want the familiarity of C with improvements, but still IDE friendliness and limited metaprogramming. This language also overlaps less with Jai than Zig does.
Amazingly, in agreement with most of this. The sooner that Jai is released, the more likely it will be a Zig popularity disruptor. And Zig is definitely vulnerable. It has a large number of issues, is dealing with trying to remain a limited/simple language, and is still far from 1.0.
Regardless of comptime, Odin's and C3's public accessibility, and their being close enough to Jai for folks to contemplate switching over, will eat at its public mind share. In both cases (be it Zig or Odin/C3), the longer that Jai keeps making the mistake of avoiding a full public release, the more it appears to be hurting itself. In fact, many would argue that a bit of Jai's "shine" has already worn off. There are now many alternative languages out there that have already been heavily influenced by it.
That would be the "best case" scenario (an inferior language beaten by better ones.)
But, no, the hubris of the language creator, whose arrogance is probably close to a few nano-Dijkstras, makes it entirely possible that he prefers _not_ releasing a superior language, out of spite for the untermenschen that would "desecrate" it by writing web servers inside it.
So I'm pretty convinced now that he will just never release it except to a happy few, and then he will die of cardiovascular disease because he spent too much time sitting in a chair streaming nonsense, and the world will have missed an opportunity.
Then again, I'm just sad.
As John Stewart said: "on the bright side, I'm told that at some point the sun will turn supernova, and we'll all die."
> Software has been getting slower at a rate roughly equivalent to the rate at which computers are getting faster.
Cite?
This problem statement is also such a weird introduction to specifically this new programming language. Yes, compiled languages with no GC are faster than the alternatives. But the problem is and was not the alternatives. Those alternatives fill the vast majority of computing uses and work well enough.
The problem is compiled languages with no GC, before Rust, were bug prone, and difficult to use safely.
So -- why are we talking about this? Because jblow won't stop catastrophizing. He has led a generation of impressionable programmers to believe that we are in some dark age of software, when that statement couldn't be further from the truth.
I carefully watched a number of the early jai language YouTube videos. Some of his opinions on non-programming topics are just goofy: I recall him ranting (and I wish I could find it again) about the supposed pointlessness of logarithmic scales (decibels, etc.) vs scientific notation, and experiencing a pretty bad cringe spasm.
> He has led a generation of impressionable programmers to believe that we are in some dark age of software, when that statement couldn't be further from the truth.
Have you actually used modern software?
There's a great rant about Visual Studio debugger which in recent versions cannot even update debugged values as you step through the program unlike its predecessors: https://youtu.be/GC-0tCy4P1U?si=t6BsHkHhoRF46mYM
And this is professional software. The state of personal software is worse. Most programs cannot show a page of text with a few images without consuming gigabytes of RAM and not-insignificant percentages of CPU.
Uh, yes. When was software better (like when was America great)? Do you remember what Windows and Linux and MacOS were like in 90s? What exactly is the software we are comparing?
> There's a great rant about Visual Studio debugger
Yeah, I'm not sure these are "great rants" as you say. Most are about how software with different constraints than video games isn't made with the same constraints as video games. Can you believe it?
I am told that in Visual Studio 2008, you could debug line by line, and it was smooth. Like there was zero lag. Then Microsoft rewrote VS from C++ to C# and it became much slower.
Modern software is indeed slow especially when you consider how fast modern hardware is.
If you want to feel the difference, try highly optimised software against a popular one. For eg: linux vs windows, windows explorer vs filepilot, zed vs vscode.
> I am told that in Visual Studio 2008, you could debug line by line, and it was smooth. Like there was zero lag. Then Microsoft rewrote VS from C++ to C# and it became much slower.
Not exactly a surprise? Microsoft made a choice to move to C# and the code was slower? Says precious little about software in general and much more about the constraints of modern development.
> If you want to feel the difference, try highly optimised software against a popular one. For eg: linux vs windows, windows explorer vs filepilot, zed vs vscode.
This reasoning is bonkers. Compare vastly different software with a vastly different design center to something only in the same vague class of systems?
If the question is "Is software getting worse or better?", doesn't it make more sense to compare newer software to the same old software? Again -- do you remember what Windows and Linux and MacOS were like in 90s? Do you not believe they have improved?
I have used Windows for 20 years. I distinctly recall it becoming slower and more painful over time despite running on more powerful hardware.
But hey, that could be nostalgia, right? We can't run Win XP in today's world, nor is it recommended, with lots of software no longer supported on Win XP.
The same is the case for Android. Android 4 had decent performance; then Android 5 came and single-handedly reduced performance and battery life. And again you can't go back, because newer apps no longer support old Android versions.
This is also seen with Apple, where newer OS versions are painful on older devices.
So on what basis can you fairly say that "modern apps are slow"? That's why I say to use faster software as a reference. I have a Linux and Windows dual boot on the same machine, and the difference in performance is night and day.
> So on what basis can you fairly say that "modern apps are slow"? That's why I say to use faster software as a reference. I have a Linux and Windows dual boot on the same machine, and the difference in performance is night and day.
Then you're not comparing old and new software. You're comparing apples and oranges. Neovim is comparable to VS Code in only the most superficial terms.
> Neovim is comparable to VS Code in only the most superficial terms.
Oh no. It can be compared in more than superficial terms. E.g. their team struggled to create a performant terminal in VS Code. Because the tech they chose (and the tech a lot of the world is using) is incapable of outputting text to the screen fast enough. Where "fast enough" is "with minimal acceptable speed which is still hundreds of times slower than a modern machine is capable of": https://code.visualstudio.com/blogs/2017/10/03/terminal-rend...
"our computers are thousands of times faster and more powerful than computers from the 90s and early 2000s, so of course it makes sense that 'constraints of development' make it impossible to make a working debugger on a modern supercomputer due to ... reasons. Doesn't mean this applies to all software ... which is written by same developers in same conditions on same machines in same languages for same OSes"
> so of course it makes sense that 'constraints of development' make it impossible to make a working debugger
All of these examples are Microsoft is not building X as well as it used to, which is entirely possible. However, Microsoft choosing to move languages says something entirely different to me than simply -- software somehow got worse. It says to me that devs weren't using C++ effectively. It says to me that a tradeoff was made re: raw performance for more flexibility and features. No one sets out to make slow software. Microsoft made a choice. At least think about why that might be.
> It says to me that a tradeoff was made re: raw performance for more flexibility and features.
It says that "our computers are thousands of times faster and more powerful than computers from the 90s and early 2000s" and yet somehow "flexibility and features" destroy all of those advancements.
> Do you remember what Windows and Linux and MacOS were like in 90s? What exactly is the software we are comparing?
Yes, yes I do.
Since then computers have become several orders of magnitude more powerful. You cannot even begin to imagine how fast and powerful our machines are.
And yet nearly everything is barely capable of minimally functioning. Everything is riddled with loading screens, lost inputs, freeze frames and janky scrolling etc. etc. Even OS-level and professional software.
I now have a AMD Ryzen 9 9950X3D CPU, GeForce RTX 5090 GPU, DDR5 6000MHz RAM and M.2 NVME disks. I should not even see any loading screen, or any operation taking longer than a second. And yet even Explorer manages to spend seconds before showing contents of some directories.
I can see the appeal if there is a need for stronger metaprogramming. Not that Zig is terrible in this area, it is just that Jon's language is much more powerful in that area at this stage.
That being said, I do see an issue with globally scoped imports. It would be nice to know if imports can be locally scoped into a namespace or struct.
In all, whether it's compete or coexist (I don't believe the compiler for Jon's language can handle other languages so you might use Zig to compile any C or C++ or Zig), it will be nice to see another programming language garner some attention and hopefully quell the hype of others.
wow... I did not get that _at all_; opinionated, maybe -- do I have to share all these opinions to the degree to which they've been expressed? No. But condescending? To whom? To duck-typed languages?
It's condescending to the people who've noticed they make mistakes and so value a language which is designed accordingly:
"So, put simply, yes, you can shoot yourself in the foot, and the caliber is enormous. But you’re being treated like an adult the whole time"
That is, those of us who've noticed we make mistakes aren't adults, we're children, and this is a proper grown-up language -- pretty much the definition of condescending.
I can't tell if you're joking or not, but if you aren't, no one is calling you a child. The article is obviously saying that the compiler doesn't stop you from doing dumb things, which is a privilege generally only extended to adults. Nobody is saying anyone who makes mistakes is a child.
If you feel this article is smug and condescending, don't start watching the language designer's stream too soon.
The least you can say is that he is _opinionated_. Even his friend Casey Muratori is "friendly" in comparison, at least trying to publish courses to elevate us masses of unworthy typescript coders to the higher planes of programming.
Jblow just wants you to feel dumb for not programming right. He's unforgiving, Socrates-style.
The worst thing is : he might be right, most of the time.
We would not know, cause we find him infuriating, and, to be honest, we're just too dumb.
In my experience programming is more about formalising the domain of the problem than it is about shuffling bits around. Take a minute more than needed and you'll lose hundreds. Get the answer wrong? Lose millions. Domains where you deprioritise correctness for speed just... don't seem that interesting to me. No need to look down on memory-managed languages. Personally, Haskell and APL impress me more, but I won't shit on the author for being stuck in an imperative paradigm.
I am not interested. I am just trying to code with C3 and make some bindings with other languages like C and Zig; it is quite easy and fun. I think that's enough for me -- learning these kinds of languages rather than using Jai, which still hasn't released its compiler to the public.
Surprising deep and level headed analysis. Jai intrigues me a lot, but my cantankerous opinion is that I will not waste my energy learning a closed source language; this ain’t the 90s any more.
I am perfectly fine for it to remain a closed alpha while Jonathan irons out the design and enacts his vision, but I hope its source gets released or forked as free software eventually.
What I am curious about, which is how I evaluate any systems programming language, is how easy it is to write a kernel with Jai. Do I have access to an asm keyword, or can I easily link assembly files? Do I have access to the linker phase to customize the layout of the ELF file? Does it need a runtime to work? Can I disable the standard library?
Iirc, pretty sure jblow has said he's open sourcing it. I think the rough timeline is: release game within the year, then the language (closed-source), then open source it.
Tbh, I think a lot of open source projects should consider following a similar strategy --- as soon as something's open sourced, you're now dealing with a lot of community management work which is onerous.
> as soon as something's open sourced, you're now dealing with a lot of community management work which is onerous.
This is a common misconception. You can release the source code of your software without accepting contributions.
> without accepting contributions.
it's not even contributions, but that other people might start asking for features, discuss direction independently (which is fine, but jblow has been on the record saying that he doesn't want even the distraction of such).
The current idea of doing jai closed sourced is to control the type of people who would be able to alpha test it - people who would be capable of overlooking the jank, but would have feedback for fundamental issues that aren't related to polish. They would also be capable of accepting alpha level completeness of the librries, and be capable of dissecting a compiler bug from their own bug or misuse of a feature etc.
You can't get any of these level of control if the source is opened.
You can simply ignore them. This worked for many smaller programming languages so far, and there exist enough open source softwares that are still governed entirely by a small group of developers. The closedness of Jai simply means that Blow doesn't understand this aspect of open source.
Ignoring people is by itself tedious and onerous. Knowing what I do about him and his work, and having spent some time watching his streams, I can say with certainly that he understands open source perfectly well and has no interest -- nor should he -- in obeying any ideology, yours for instance, as to how it's supposed to be handled, if it doesn't align with what he wants. He doesn't care whether he's doing open source "correctly."
Yeah, he is free to do anything as he wants, but I'm also free to ignore his work due to his choice. And I don't think my decision is unique to me, hence the comment.
Maybe there's aspirations to not be a "smaller programming language" and he'd rather not cause confusion and burn interested parties by having it available.
Releasing it when you're not ready to collect any upside from that decision ("simply ignore them") but will incur all the downside from a confused and muddled understanding of what the project is at any given time sounds like a really bad idea.
In that case the release interval can be tweaked, just frequent enough to keep people interested.
It seems to be there's already enough interest for the closed beta to work.
A lot of things being open sourced are using open source as a marketing ploy. I'm somewhat glad that jai is being developed this way - it's as opinionated as it can be, and with the promise to open source it after completion, i feel it is sufficient.
Yep. A closed set of core language designers who have exclusive right to propose new paths for the language to take while developing fully Free and in the open is how Zig is developing.
I believe sqlite does this.
That kind of means jack squat though. Jai is an unfinished *programming language*, Sqlite is an extremely mature *database*.
What chii is suggesting is open sourcing Jai now may cause nothing but distractions for the creator with 0 upside. People will write articles about its current state, ask why it's not like their favorite language or doesn't have such-and-such library. They will even suggest the creator is trying to "monopolize" some domain space because that's what programmers do to small open source projects.
That's a completely different situation from Sqlite and Linux, two massively-funded projects so mature and battle-tested that low-effort suggestions for the projects are not taken seriously. If I write an article asking Sqlite to be completely event-source focused in 5 years, I would be rightfully dunked on. Yet look at all the articles asking Zig to be "Rust but better."
I think you can look at any budding language over the past 20 years and see that people are not kind to a single maintainer with an open inbox.
We can muse about it all day, but the choice is not ours to make. I simply presented the reality that other successful open source projects exist that were also in an 'early development state'.
There are positives and negatives to it, I'm not naive to the way the world works. People have free speech and the right to criticise the language, with or without access to the compiler and toolchain itself, you will never stop the tide of crazy.
I personally believe that you can do opensource with strong stewardship even in the face of lunacy, the sqlite contributions policy is a very good example of handling this.
Closed or open, Blow will do what he wants. Waiting for a time when jai is in a "good enough state" will not change any of the insanity that you've mentioned above.
I don't have a stake in this particular language or its author, I was just discussing the pros and cons of the approach.
> Waiting for a time when jai is in a "good enough state" will not change any of the insanity that you've mentioned above.
I outlined some reasons why I think it would, and I think there's good precedent for that.
> the choice is not ours to make
I never said it was.
> People have free speech
I don't think I argued people don't have free speech? This is an easily defensible red herring to throw out, but it's irrelevant. People can say whatever they want on any forum, regardless of the project's openness. I am merely suggesting people are less inclined to shit on a battle-tested language than on a young, moldable one.
Famously, yes: https://sqlite.org/copyright.html (see "Open-Source, not Open-Contribution")
By my reading, the restriction seems to simply impose some (reasonable?) legal restrictions on contributions rather than ban them out of principle.
Interesting, they've softened their stance. Today, it reads
> In order to keep SQLite in the public domain and ensure that the code does not become contaminated with proprietary or licensed content, the project does not accept patches from people who have not submitted an affidavit dedicating their contribution into the public domain.
But it used to read
> In order to keep SQLite in the public domain and ensure that the code does not become contaminated with proprietary or licensed content, the project does not accept patches from unknown persons.
(I randomly picked a date and found https://web.archive.org/web/20200111071813/https://sqlite.or... )
Seems to be hardened, not softened: a person who has submitted an affidavit dedicating code to the public domain is at least minimally known, but a person may be known without submitting an affidavit, so the new form is strictly a stronger restriction than the old one.
I claim the edit is neither a hardening nor a softening but rather a clarification and an attempt to better explain the original intent.
>You can simply ignore them.
You say this now, but between 2013 and around 2023 the prevailing definition of open source was that if you don't engage with the community and don't accept PRs, it is not open source. And people will start bad-mouthing the project around the internet.
Working on a project is hard enough as it is.
Linux doesn't take PRs on github, and sqlite doesn't take patches. Open Source isn't a community model, only a license model.
>Open Source isn't a community model, only a license model.
Again, not between 2015 and ~2023. And after what happened I don't blame people who don't want to do it.
So your position is that Linux no longer counts as open source?
Linux started before 2013? So did SQLite? And both are not even comparable, as they were already the dominant force and not newly started projects.
And in case you somehow think I am against you: I am merely pointing out what happened between 2013 and 2023. I believe you were also one of the few on HN who fought against it.
> if you dont engage with the community and dont accept PRs it is not open source
You'd be really hard pressed to find somebody who doesn't consider SQLite to be open source.
That was never the definition of open source. That may have been how people were using it, but they were in error if so.
Well, except no one pushed against it at the time. Worth remembering that.
Open source software with a closed development model has existed for a very long time, so that should have been taken into account whether or not it was considered open source. (And I think it was around the 2000s, not the 2010s, when such a misconception was more widespread.)
I don't deny what you said. I am merely pointing out this wasn't a popular model or opinion during that time.
I don't think the issue is just contributions. It's the visibility.
When you're a somewhat famous programmer releasing a long anticipated project, there's going to be a lot of eyes on that project. That's just going to come with hassle.
> That's just going to come with hassle.
Well, it is the public internet, people are free to discuss whatever they come across. Just like you're free to ignore all of them, and release your software Bellard-style (just dump the release at your website, see https://bellard.org/) without any bug tracker or place for people to send patches to.
One is also free to not provide the food for discussion, that's the choice jblow made.
Timing IS important, releasing too early can kill public opinion on a project.
So can announcing too early. See: Duke Nukem Forever. Or in the language domain, V-lang.
Having a lot of eyes on it is only a problem if you either have a self-esteem problem, so the inevitable criticism will blow you up, or you've got an ego problem, so the inevitable criticism will hurt your poor fragile ego. I think we can be sure which of these will be a problem for Jonathan "Why didn't people pay $$$ for a remaster of my old game which no longer stands out as interesting?" Blow.
He routinely livestreams himself working on the language. He doesn't seem afraid of attention.
yep and JBlow is a massive gatekeeper who discourages people from learning programming if he doesn't believe they can program the way he thinks a programmer should. He is absolutely running from any criticism that will hurt his enormous yet incredibly fragile ego.
The hate he is receiving is bizarre. It takes guts to be opinionated - you are effectively spilling your mind (and heart) to people. And yet some people will assume the worst about you even if it's an exact inversion of the truth.
Opinionated people are polarizing, it makes perfect sense.
It's not a "misconception". Open source implying open contributions is a very common stance, if not even the mainstream stance. Source availability is for better or for worse just one aspect of open source.
It is a misconception. Open source doesn’t mean the maintainer needs to interact with you. It just means you can access the code and do your own fork with whatever features you like.
Open Source definition ( https://opensource.org/osd ) says nothing about community involvement or accepting contributions. It may be common, but it is not necessary, required or even hinted at in the license.
Open source is not a philosophy, it is a license.
For many it is very much a philosophy, a principle, and politics. The OSI is not the sole arbiter of what open source is, and while their definition is somewhat commonly referred to, it is not the be all end all.
Sovereign citizens believe they don't need to adhere to the law; individual belief sadly doesn't override reality.
I could say the same about the practical reality of open contributions being extremely heavily interwoven with open source.
We're debating made up stuff here. The reality is all in our collective heads.
Would you say that SQLite is not open source?
Yes. I'd call it source available instead. Although it does have some hallmarks of open source, such as its funding.
Source available is already a well understood term that does not mean this.
https://en.wikipedia.org/wiki/Source-available_software
Reading the preamble there, and the paragraph after that, I find what I said to be consistent with what the page is saying.
> Any software is source-available in the broad sense as long as its source code is distributed along with it, even if the user has no legal rights to use, share, modify or even compile it.
You have the legal right to use, share, modify, and compile, SQlite's source. If it were Source Available, you'd have the right to look at it, but do none of those things.
But not necessarily any of the other things! Big difference. Please read it again.
That's your assertion, I am saying that it is not correct in the general way that people understand the terms "open source" and "source available."
I doubt we're going to come to an agreement here, though, so I'll leave it at that.
> even if the user has no legal rights to use, share, modify or even compile it.
Emphasis on even. It can have such rights, or not, the term may still apply regardless.
IMO the main thing they're risking by open sourcing it is adoption. Keeping it closed source is a pretty clear sign to the rest of the world that the language is not ready for widespread adoption. As soon as you open source it, even if you mark it as alpha, you'll end up with people using the language, and breaking changes will at that point break people's code.
> language is not ready for widespread adoption.
Keeping things closed source is one way of indicating that. Another is to use a license that contains "THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED [...]" and then let people make their own choices. Just because something is open source doesn't mean it's ready for widespread adoption.
You're describing pretty much every popular open source license here, including the Linux kernel(GPLv2). This doesn't set the expectation that things can and will break at any time. That's also not the approach maintainers take with most serious projects.
If you have users, then breaking changes will break those users. This is true regardless of how many warranty disclaimers you have.
There is a lot of experimentation going on as well. A few months ago two new casting syntaxes were added for users to evaluate. The plan is to keep only one and remove the others before release.
An argument can easily be made that Jai could have been released as closed-source some time ago. Many fans and the curious just want to be able to get their hands on it.
Jon is not going to stop public reaction nor will Jai be perfect, regardless of when he releases. At least releasing sooner, allows it to keep momentum. Not just generated by him, but by third parties, such as books and videos on it. Maybe that's where Jon is making a mistake. Not allowing others to help generate momentum.
That’s what I meant by forked. If Jonathan wants to keep his branch closed source, that’s fine, as long as he cuts a release, gives it a GNU license and calls it OpenJai or something. He doesn’t have to deal with the community, somebody will do that for him.
Apparently the 90's approach not only still works pretty well when the language comes with a piece of green-coloured hardware; all the ongoing returns to 90's licensing models prove that the free-beer approach isn't working when the goal is to build a sustainable business out of the technology.
Some comparison with D:
> Do I have access to an asm keyword,
Yes, D has a builtin assembler
> or can I easily link assembly files?
Yes
> Do I have access to the linker phase to customize the layout of the ELF file?
D uses standard linkers.
> Does it need a runtime to work?
With the -betterC switch, it only relies on the C runtime
> Can I disable the standard library?
You don't need the C runtime if you don't call any of the functions in it.
Thanks. I haven’t played with D since it also had a closed source implementation (10+ years ago) and never kept up with its newer development. I should check it out again.
D is Boost licensed front to back, which is the free'est license out there.
I don't get what's up with the runtime hysteria. All languages have a runtime, except maybe assembler. And the Linux kernel itself is infamous for not being plain C by a large margin. And in general, remove something important from any program and it will stop working.
If you do embedded work, you often want to be in total control of all memory allocations. So it is good to know that the compiler will not produce some invisible heap allocations and that there is a useful subset of the standard library that does not use them either.
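One hedged way to get that guarantee in C++ (sizes and names here are arbitrary): back an arena with a static buffer and make its upstream the null resource, so any allocation that would otherwise silently fall through to the general heap fails loudly instead.

    #include <array>
    #include <cstddef>
    #include <memory_resource>
    #include <vector>

    // All dynamic storage comes out of this statically reserved buffer.
    alignas(std::max_align_t) static std::array<std::byte, 8 * 1024> g_storage;

    // Upstream is null_memory_resource(): if the buffer runs out, allocation
    // throws std::bad_alloc instead of quietly reaching for the heap.
    static std::pmr::monotonic_buffer_resource g_arena{
        g_storage.data(), g_storage.size(), std::pmr::null_memory_resource()};

    int main() {
        std::pmr::vector<int> samples{&g_arena};   // no hidden global-heap use
        samples.reserve(256);
        for (int i = 0; i < 256; ++i) samples.push_back(i);
    }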
There is this streamer who does a lot of interesting language exploration on his own. I'm not saying you will find all the answers to your questions, but I think you will get a good sense of what you can or cannot do in jai: https://www.youtube.com/results?search_query=Tsoding+jai
Tsoding is great. Don’t be put off by the memelord persona, he’s a genuinely smart guy always exploring some interesting language or library, or reimplementing something from scratch to truly understand it.
> Don’t be put off by the memelord persona
One can be put off by whatever one is put off by. I've gotten to the point where I realized that I don't need to listen to everyone's opinion. Everyone's got some. If one opinion is important, it will likely be shared by more than one person. From that it follows that there's no need to subject oneself to specific people one is put off by. Or put another way: if there's an actionable critique, and two people are stating it, and one is a dick and the other isn't, I'll pay attention to the one who isn't a dick. Life's too short to waste it with abrasive people, regardless of whether that is "what is in their heart" or a constructed persona. The worst effect of the "asshole genius" trope is that it makes a lot of assholes think they are geniuses.
Geez, no need to get upset over a recommendation. If you watch him or don’t, I don’t care either way.
I don’t see what is on topic or constructive about your outburst.
> If one opinion is important, it will likely be shared by more than one person.
Sometimes nobody else shares the opinion and the “abrasive person” is both good-hearted and right in their belief: https://en.m.wikipedia.org/wiki/Ignaz_Semmelweis
Personally, I’d rather be the kind of person who could have evaluated Semmelweis’s claims dispassionately rather than one who reflexively wrote him off because he was strident in his opinions. Doctors of the second type tragically shortened the lives of those under their care!
Being abrasive is different from being a "memelord." The former is excusable and socially valuable and politically healthy, even essential. The latter is immature, antisocial, and socially and politically corrosive.
> Don’t be put off by the memelord persona
If it's a persona, then he's at best a performer and entertainer pandering to an audience that enjoys or relates to immature, insufferable people. If it isn't a persona, then he's just an immature, insufferable person.
No, thank you. Either way, the result is psychologically, socially, and politically corrosive and typically attracts a horrendous, overall obnoxious audience.
You can also watch Jonathan Blow himself writing a commercial game and developing jai on stream: https://www.twitch.tv/j_blow
Is he actually doing that or is he doing what Casey Muratori's doing with Handmade Hero and taking almost a decade to implement a debug room for a generic top-down Zelda clone?
You can watch the streams and decide for yourself.
In a recent interview he mentioned they are aiming for a release later this year: https://youtu.be/jamU6SQBtxk?si=nMTKbJjZ20YFwmaC
I did not know about this, I will have a look, thanks !
I have my doubts with Jai, the fact that Blow & co seems to have major misunderstandings with regards to RAII doesn't lend much confidence.
Also, a 19,000 line C++ program (this is tiny) does not take 45 minutes unless something is seriously broken; it should be a few seconds at most for a full rebuild, even with a decent amount of template usage. This makes me suspect this author doesn't have much C++ experience, as this should have been obvious to them.
I do like the build script being in the same language, CMake can just die.
The metaprogramming looks more confusing than C++, why is "sin"/"cos" a string?
Based on this article I'm not sure what Jai's strength is, I would have assumed metaprogramming and SIMD prior, but these are hardly discussed, and the bit on metaprogramming didn't make much sense to me.
> Also a 19,000 line C++ program(this is tiny) does not take 45 minutes unless something is seriously broken
Agreed, 45 minutes is insane. In my experience, and this does depend on a lot of variables, 1 million lines of C++ ends up taking about 20 minutes. If we assume this scales linearly (I don't think it does, but let's imagine), 19k lines should take about 20 seconds. Maybe a little more with overhead, or a little less because of less burden on the linker.
There's a lot of assumptions in that back-of-the-envelope math, but if they're in the right ballpark it does mean that Jai has an order of magnitude faster builds.
I'm sure the big win is having a legit module system instead of plaintext header #include
It depends heavily on features used, too. C++ without templates compiles nearly as quickly as C.
For 1 million lines of C++ to take 20 minutes you must be building using a single core.
I seriously doubt that any of them have trouble understanding a concept as simple as RAII.
Yeah it's weird but the author of this post claiming that defer can replace RAII kinda suggests that. RAII isn't just about releasing the resource you acquired in the current scope in the same scope. You can pass the resource across multiple boundaries with move semantics and only at the end when it's no longer needed the resources will be released.
I don't get the point, what does this have to do with defer?
The author of the post claims that defer eliminates the need for RAII.
Well, goto also eliminates the "need" but language features are about making life easier, and life is much easier with RAII compared to having only defer.
I got that, but the I don't see what the example of move semantics has to do with RAII or defer.
It makes things easier. Usually the move constructor (or move assignment operator) will cause the moved-from object to stop being responsible for releasing a resource, moving the responsibility to the moved-to object. Simplest example: move-construct unique_ptr X from unique_ptr Y. When X is destroyed it will free the memory; when Y is destroyed it will do nothing.
So you can allocate resource in one function, then move the object across function boundaries, module boundaries, into another object etc. and in the end the resource will be released exactly once when the final object is destroyed. No need to remember in each of these places along the path to release the resource explicitly if there's an error (through defer or otherwise).
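A minimal illustration of that hand-off (the types are invented): ownership moves through two boundaries and the resource is released exactly once, with no explicit free or defer at any intermediate stop.

    #include <cstdio>
    #include <memory>

    struct File {                                // stand-in resource
        File()  { std::puts("opened"); }
        ~File() { std::puts("closed"); }         // runs exactly once
    };

    struct Session {
        std::unique_ptr<File> log;               // final owner
    };

    std::unique_ptr<File> open_log() {           // acquire in one function...
        return std::make_unique<File>();
    }

    Session start_session() {                    // ...move across boundaries...
        Session s;
        s.log = open_log();                      // ownership transferred, no copy
        return s;
    }

    int main() {
        Session s = start_session();
        // ...and the File is closed exactly once when `s` goes out of scope.
    }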
I agree that it makes some things easier (at the expense of managing constructors/destructors), I'm disputing the blanket assertion that it's superior to manual management, in the context of Jai (and Odin). You're also introducing a reference count, but that's besides the point.
In Jai/Odin, every scope has default global and temp allocators, there's nothing stopping you from transferring ownership and/or passing pointers down the callstack. Then you either free in the last scope where the pointer lives or you pick a natural lifetime near the top of the callstack, defer clear temp there, and forget about it.
You may also want to pass a resource through something like a channel, promise/future pair or similar. So it's not just down/up the callstack, sometimes it's "sideways". In those cases RAII is a life saver. Otherwise you have to explicitly remember to cover all the possibilities:
- what if the resource never enters the channel
- what if it enters the channel but never gets retrieved on the other side
- what if the channel gets closed
- what if the other side tries to retrieve but cancels
Or you leak the resource.
> You're also introducing a reference count, but that's besides the point.
How so? RAII absolutely doesn't imply reference counting.
Honestly I concur. Out of interest in what sort of methods they came up with to manage memory, I checked out the language's wiki, and not sure if going back to 1970s C (with the defer statement on top) is an improvement. You have to write defer everywhere, and if your object outlives the scope of the function, even that is useless.
I'm sure having to remember to free resources manually has caused so much grief, that they decided to come up with RAII, so an object going out of scope (either on the stack, or its owning object getting destroyed) would clean up its resources.
Compared to a lot of low-level people, I don't hate garbage collection either, with a lot of implementations reducing to pointer bumping for allocation, which is an equivalent behavior to these super-fast temporary arenas, with the caveat that once you run out of memory, the GC cleans up and defragments your heap.
If for some reason, you manage to throw away the memory you allocated before the GC comes along, all that memory becomes junk at zero cost, with the mark-and-sweep algorithm not even having to look at it.
I'm not claiming either GC or RAII are faultless, but throwing up your hands in the air and going back to 1970s methods is not a good solution imo.
That being said, I happen to find a lot that's good about Jai as well, which I'm not going to go into detail about.
There is no RAII in his language. Why would you care if he understands it or not?
What an odd take. It is like saying: there is no addition semantic in his language, why would you care if he understands it or not?
This take is equally bizarre. Most languages have an addition semantic. Most languages do not have RAII. That's, by and large, a C++ thing. Jai does NOT have RAII. So, again, why would anybody care what his opinion on RAII is?
> The net effect of this is that the software you’re running on your computer is effectively wiping out the last 10-20 years of hardware evolution; in some extreme cases, more like 30 years.
As an industry we need to worry about this more. I get that in business, if you can be less efficient in order to put out more features faster, your dps[0] is higher. But as both a programmer and an end user, I care deeply about efficiency. Bad enough when just one application is sucking up resources unnecessarily, but now it's nearly every application, up to and including the OS itself if you are lucky enough to be a Microsoft customer.
The hardware I have sitting on my desk is vastly more powerful that what I was rocking 10-20 years ago, but the user experience seems about the same. No new features have really revolutionized how I use the computer, so from my perspective all we have done is make everything slower in lockstep with hardware advances.
[0] dollars per second
I understand the attitude but I think it misses a few aspects.
We have far more isolation between software, we have cryptography that would have been impractical to compute decades ago, and it’s used at rest and on the wire. All that comes at significant cost. It might only be a few percent of performance on modern systems, and therefore easy to justify, but it would have been a higher percentage a few decades ago.
Another thing that’s not considered is the scale of data. Yes software is slower, but it’s processing more data. A video file now might be 4K, where decades ago it may have been 240p. It’s probably also far more compressed today to ensure that the file size growth wasn’t entirely linear. The simple act of replaying a video takes far more processing than it did before.
Lastly, the focus on dynamic languages is often either misinformed or purposefully misleading. LLM training is often done in Python and it’s some of the most performance sensitive work being done at the moment. Of course that’s because the actual training isn’t executing in a Python VM. The same is true for so much of “dynamic languages” though, the heavy lifting is done elsewhere and the actual performance benefits of rewriting the Python bit to C++ or something would often be minimal. This does vary of course, but it’s not something I see acknowledged in these overly simplified arguments.
Requirements have changed, software has to do far more, and we’re kidding ourselves if we think it’s comparable. That’s not to say we shouldn’t reduce wastage, we should! But to dismiss modern software engineering because of dynamic languages etc is naive.
> The hardware I have sitting on my desk is vastly more powerful that what I was rocking 10-20 years ago, but the user experience seems about the same.
Not even.
It used to be that when you clicked a button, things happened immediately, instead of a few seconds later as everything freezes up. Text could be entered into fields without inputs getting dropped or playing catch-up. A mysterious unkillable service wouldn't randomly decide to peg your core several times a day. This was all the case even as late as Windows 7.
At the same time, it was also the case that you typed 9 characters into an 8-character field and you p0wn3d the application.
>Text could be entered into fields without inputs getting dropped or playing catch-up
This has been a complaint since the DOS days that has always been around from my experience. I'm pretty sure it's been industry standard from its inception that most large software providers make the software just fast enough the users don't give up and that's it.
Take something like notepad in opening files. Large files take forever. Yet I can pop open notepad++ from some random small team and it opens the same file quickly.
> async/await, a pattern increasingly polluting Javascript and Python codebases in the name of performance
In JS world async/await was never about performance, it was always about having more readable code than Promise chain spagetti.
Jai's perpetual closed beta is such a weird thing... On the one hand, I sort of get that the developers don't want to waste their time and attention on too many random people trying to butt in with their ideas and suggestions. On the other hand, they are thereby wasting the time and attention of all the people who watched the development videos and read the blog posts, and now can do basically nothing with that knowledge other than slowly forget it. (Except for the few who take the ideas and incorporate them into their own languages).
The reality of a project like this is that to get it right (which is by the creator's standards, no one else's) takes time. Add on top of that Blow and Thekla are building games with this to dogfood it which takes time, too.
Sadly, there exists a breed of developer that is manipulative, obnoxious, and loves to waste time and denigrate someone building something. Relatively few people are genuinely interested (like the OP) in helping to develop the thing, test builds, etc. Most just want to make contributions for their Github profile (assuming OSS) or exorcise their internal demons by projecting their insecurities onto someone else.
From all of the JB content I've seen/read, this is a rough approximation of his position. It's far less stressful to just work on the idea in relative isolation until it's ready (by whatever standard) than to deal with the random chaos of letting anyone and everyone in.
This [1] is worth listening to (suspending cynicism) to get at the "why" (my editorialization, not JB).
Personally, I wish more people working on stuff were like this. It makes me far more likely to adopt it when it is ready because I can trust that the appropriate time was put in to building it.
[1] https://www.youtube.com/watch?v=ZY0ZmeYmyjU
I get that. But if you want to work in relative isolation, would it be too much to ask to not advertise the project publicly and wax poetic about how productive this (unavailable) language makes you? Having had a considerable interest in Jai in the past, I do feel a little bit cheated :) even though I realize no binding promises have been made.
> would it be too much to ask to not advertise the project publicly and wax poetic about how productive this (unavailable) language makes you
All the "public advertisement" he's done was a few early presentations of some ideas and then ... just live streaming his work
As well as "a few early presentations" (multiple hour+ conference talks) Jon keeps appearing on podcasts, and of course he's there to talk about this unavailable programming language although sometimes he does also talk about The Witness or Braid.
It's a common thing in programming language design and circles where some people like to form little cults of personality around their project. Curtis Yarvin did that with his Urbit project. V-Lang is another good example. I consider Elm an example as well.
They get a few "true believer" followers, give them special privileges like beta access (this case), special arcane knowledge (see Urbit), or even special standing within the community (also Urbit, although many other languages where the true believers are given authority over community spaces like discord/mailing list/irc etc.).
I don't associate in these spaces because I find the people especially toxic. Usually they are high drama because the focus isn't around technical matters but instead around the cult leader and the drama that surrounds him, defending/attacking his decisions, rationalizing his whims, and toeing the line.
Like this thread, where a large proportion is discussion about Blow as a personality rather than the technical merit of his work. He wants it that way; not to say that his work doesn't have technical merit, but that he'd rather we be talking about him.
One thing I want to add to the other (so far) good responses: They also seem to build Jai for a means to an end, which is: they are actively developing a game engine with it (to be used for more than one project) and a game, which is already in advanced stages.
If you consider a small team working on this, developing the language seriously, earnestly, but as a means to an end on the side, I can totally see why they think it may be the best approach to develop the language fully internally. It's an iterative develop-as-you-go approach, you're writing a highly specific opinionated tool for your niche.
So maybe it's best to simply wait until engine + game are done, and they can (depending on the game's success) really devote focus and time on polishing language and compiler up, stabilizing a version 1.0 if you will, and "package" it in an appropriate manner.
Plus: they don't seem to be in the "promote a language for the language's sake" game; it doesn't seem to be about finding the perfect release date, with shiny mascot + discord server + full fledged stdlib + full documentation from day one, to then "hire" redditors and youtubers to spread the word and have an armada of newbie programmers use it to write games... they seem to much rather see it as creating a professional tool aimed at professional programmers, particularly in the domain of high performance compiled languages, particularly for games. People they are targeting will evaluate the language thoroughly when it's out, whether that's in 2019, 2025 or 2028. And whether they are top 10 in some popularity contest or not, I just don't think they're playing by such metrics. The right people will check it out once it's out, I'm sure. And whether such a language will be used or not, will probably, hopefully even, not depend on finding the most hyped point in time to release it.
> It’s a simple keyword, but it singlehandedly eliminates the need for any kind of RAII.
What if you want to put a resource object (which needs a cleanup on destruction) into a vector then give up ownership of the vector to someone?
I write code in go now after moving from C++ and God do I miss destructors. Saying that defer eliminates need for RAII triggers me so much
There's a school of thought that correctly states that in that case it is very easy to cause expensive drop behavior to be run for each element in the vector where a faster batch approach could instead be used, which is doable, if not encouraged, with defer, so people should be pushed towards that.
I do not subscribe to that idea, because with RAII you can still have batched drops; the only difference between the two defaults is that with defer the failure mode is leaks, while with RAII the failure mode is more code than you otherwise would have.
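To put the earlier question (a resource object in a vector whose ownership is then given away) into code, a hedged C++ sketch with invented types: the cleanup obligation travels with the container, so no scope along the way needs its own defer.

    #include <memory>
    #include <vector>

    struct Texture {                      // imagine this wraps a GPU handle
        ~Texture() { /* release the handle */ }
    };

    struct Renderer {
        // Final owner: every Texture is destroyed when the Renderer is.
        std::vector<std::unique_ptr<Texture>> textures;
    };

    std::vector<std::unique_ptr<Texture>> load_level() {
        std::vector<std::unique_ptr<Texture>> out;
        for (int i = 0; i < 3; ++i) out.push_back(std::make_unique<Texture>());
        return out;                       // ownership leaves this scope intact
    }

    int main() {
        Renderer r;
        r.textures = load_level();        // handed off across a boundary
    }                                     // released here, exactly once each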
> rather than asking the programmer to do an extraordinary amount of extra work to conform to syntactically enforced safety rules. Put the complexity in the compiler, dudes.
And how would that compiler work? Magic? Maybe clairvoyance?
> I’d be much more excited about that promise [memory safety in Rust] if the compiler provided that safety, rather than asking the programmer to do an extraordinary amount of extra work to conform to syntactically enforced safety rules. Put the complexity in the compiler, dudes.
That exists; it's called garbage collection.
If you don't want the performance characteristics of garbage collection, something has to give. Either you sacrifice memory safety or you accept a more restrictive paradigm than GC'd languages give you. For some reason, programming language enthusiasts think that if you think really hard, every issue has some solution out there without any drawbacks at all just waiting to be found. But in fact, creating a system that has zero runtime overhead and unlimited aliasing with a mutable heap is as impossible as finding two even numbers whose sum is odd.
The faster computers get, the more the GC problem is way overblown apart from super-low-latency niches. Even AAA games these days happily run on GC languages.
There is a prominent contributor to HN whose profile says they dream of a world where all languages offer automatic memory management and I think about that a lot, as a low-level backend engineer. Unless I find myself writing an HFT bot or a kernel, I have zero need to care about memory allocation, cycles, and who owns what.
Productivity >> worrying about memory.
GC doesn't exactly solve your memory problem; it typically means that your memory problem gets deferred quite far until you can't ignore that. Of course it is also quite likely that your program will never grow to that point, which is why GC works in general, but also why there exists a desire to avoid it when makes sense.
Not sure why you're down-voted, this is correct.
In games you have 16ms to draw billion+ triangles (etc.).
In web, you have 100ms to round-trip a request under arbitrarily high load (etc.)
Cases where you cannot "stop the world" at random and just "clean up garbage" are quite common in programming. And when they happen in GC'd languages, you're much worse off.
That's why it's good that GC algorithms that do not "stop the world" have been in production for decades now.
There are none, at least not production grade.
I have heard (and from when I investigated) that Erlang's GC is "don't stop the world".
Maybe my definition is bad though.
There are none, or you're not aware of their existence?
There are no production implementations of GC algorithms that don't stop the world at all. I know this because I have some expertise in GC algorithms.
I'm curious why you don't consider C4 to be production grade
Azul C4 is not a pauseless GC. In the documentation it says "C4 uses a 4-stage concurrent execution mechanism that eliminates almost all stop-the-world pauses."
Except the documentation says
> C4 differentiates itself from other generational garbage collectors by supporting simultaneous-generational concurrency: the different generations are collected using concurrent (non stop-the-world) mechanisms
It doesn't matter at all. C4 uses STW.
I wish I could have learned more from this interaction than it doesn't matter.
Java's ZGC claims O(1) pause time of 0.05ms.
(As with any low-pause collector, the rest of your code is uniformly slower by some percentage because it has to make sure not to step on the toes of the concurrently-running collector.)
> Java's ZGC claims O(1) pause time of 0.05ms
In practice it's actually closer to 10ms for large heaps. Large being around 220 GB.
With Java, the issue is that each allocated object carries significant memory footprint, as result total memory consumption is much higher compared to C++: https://benchmarksgame-team.pages.debian.net/benchmarksgame/...
The benchmarks game shows memory use with default GC settings (as a way to uncover space-time tradeoffs), mostly for tiny tiny programs that hardly use memory.
Less difference — mandelbrot, k-nucleotide, reverse-complement, regex-redux — when the task requires memory to be used.
Less with GraalVM native-image:
https://benchmarksgame-team.pages.debian.net/benchmarksgame/...
> Less difference — mandelbrot, k-nucleotide, reverse-complement, regex-redux — when the task requires memory to be used.
Yes, I was referring to benchmarks with large memory consumption, where Java still uses 2 to 10 times more memory (as in the binary-trees task), which is a large overhead.
That’s fair, no resource is unlimited. My point is that memory is usually the least of one’s problems, even on average machines. Productivity and CPU usage tend to be the bottlenecks for developers and users. GC is mostly a performance problem rather than a memory one, and a well-designed language can minimize its impact. (I am working on a message-passing language, and only allowing GC after a reply greatly simplifies the design and performance characteristics.)
>My point is that memory is usually the least of one’s problems, even on average machines.
The average machine a person directly interacts with is a phone or TV at this point, both of which have major BoM restrictions and high pixel density displays. Memory is the primary determination of performance in such environments.
On desktops and servers, CPU performance is bottlenecked on memory - garbage collection isn't necessarily a problem there, but the nature of separate allocations and pointer chasing is.
On battery, garbage collection costs significant power and so it gets deferred (at least for full collections) until it's unavoidable. In practice this means that a large amount of heap space is "dead", which costs memory.
Your language sounds interesting - I've always thought that it would be cool to have a language where generational GC was exposed to the programmer. If you have a server, you can have one new generation arena per request with a write barrier for incoming references from the old generation to the new. Then you could perform young GC after every request, only paying for traversal+move of objects that survived.
eh, there are GC languages famous for high uptimes and deployed in places where it "basically runs forever with no intervention", so in practice with the right GC and application scope, "deferring the concern till the heat death of the universe" (or until a CVE forces a soft update) is possible.
That's exactly why I said "it is also quite likely that your program will never grow to that point". Of course you need non-trivial knowledge to determine whether your application and GC satisfy that criterion.
>Even AAA games these days happily run on GC languages.
Which games are these? Are you referring to games written in Unity where the game logic is scripted in C#? Or are you referring to Minecraft Java Edition?
I seriously doubt you would get close to the same performance in a modern AAA title running in a Java/C# based engine.
Unreal Engine has a GC.
You're right that there is a difference between "engine written largely in C++ and some parts are GC'd" vs "game written in Java/C#", but it's certainly not unheard of to use a GC in games, pervasively in simpler ones (Heck, Balatro is written in Lua!) and sparingly in even more advanced titles.
Thanks for the Rust book!
You're welcome!
I think Balatro uses the Love2d engine which is in C/C++.
Sure, but you write games in it in Lua. That Love2d is implemented in C++ (GitHub says like 80% C++ and 10% C) doesn't mean that you're writing the game in it. In my understanding, Love2d uses reference counting (which is still GC) for its own stuff, and integrates those into Lua's tracing GC.
Unreal Engine has a C++-based GC.
https://dev.epicgames.com/documentation/en-us/unreal-engine/...
C#? Maybe. Java? Less likely.
> Even AAA games these days happily run on GC languages.
You can recognize them by their poor performance.
This is exactly the attitude this blog post spends its first section pretty passionately railing against.
Well, 1) the temporary allocator strategy; and 2) `defer` kinda go against the spirit of this observation.
With (1) you get the benefits of GC with, in many cases, a single line of code. This handles a lot of use cases. Of those it doesn't, `defer` is that "other single line".
I think the issue being raised is the "convenience payoff for the syntax/semantics burden". The payoff for temp-alloc and defer is enormous: you make the memory management explicit so you can easily see-and-reason-about the code; and it's a trivial amount of code.
There feels like something deeply wrong with RAII-style languages: you carry the burden of reasoning about implicit behaviour, and all the while that behaviour saves you nothing. It's the worst of both worlds: hiddenness and burdensomeness.
Neither of those gives memory safety, which is what the parent comment is about. If you release the temporary allocator while a pointer to some data is live, you get use after free. If you defer freeing a resource, and a pointer to the resource lives on after the scope exit, you get use after free.
The dialectic begins with the OP, then pcw's reply, and then mine. It does not begin with pcw's comment. The OP complains about Rust not because they imagine Jai is memory safe, but because they feel the rewards of its approach significantly outweigh the costs of Rust.
pcw's comment was about the tradeoffs programmers are willing to make -- and it paints the picture more black-and-white than the reality, and more black-and-white than the OP does.
While technically true, it still simplifies memory management a lot. The tradeoff in fact is good enough that I would pick that over a borrow checker.
I don't understand this take at all. The borrow checker is automatic and works across all variables. Defer et al requires you remember to use it, and use it correctly. It takes more effort to use defer correctly whereas Rust's borrow checker works for you without needing to do much extra at all! What am I missing?
What you're missing is that Rust's borrowing rules are not the definition of memory safety. They are just one particular approach that works, but with tradeoffs.
Namely, in Rust it is undefined behavior for multiple mutable references to the same data to exist, ever. And it is also not enough for your program to not create multiple mut - the compiler also has to be able to prove that it can't.
That rule prevents memory corruption, but it outlaws many programs that break the rule yet actually are otherwise memory safe, and it also outlaws programs that follow the rule but wherein the compiler isn't smart enough to prove that the rule is being followed. That annoyance is the main thing people are talking about when they say they are "fighting the borrow checker" (when comparing Rust with languages like Odin/Zig/Jai).
That is true of `&mut T`, but `&mut T` is not the only way to do mutation in Rust. The set of possible safe patterns gets much wider when you include `&Cell<T>`. For example see this language that uses its equivalent of `&Cell<T>` as the primary mutable reference type, and uses its equivalent of `&mut T` more sparingly: https://antelang.org/blog/safe_shared_mutability/
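For illustration, here's a minimal Rust sketch (my own example, not from the linked post) of shared mutation through `Cell`, with no `&mut` ever formed:

```rust
use std::cell::Cell;

// Two shared (&) references to the same Cell coexist and both mutate it.
// No &mut is ever created, so the exclusive-aliasing rule never comes into play.
fn bump(counter: &Cell<u32>) {
    counter.set(counter.get() + 1);
}

fn main() {
    let counter = Cell::new(0u32);
    let a = &counter;
    let b = &counter;
    bump(a);
    bump(b);
    assert_eq!(counter.get(), 2);
}
```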
Rust doesn't outlaw them. It just forces you to document where safety workarounds are used.
> The borrow checker is automatic and works across all variables.
Not that I'm such a Rust hater, but this is also a simplification of the reality. The term "fighting the borrow checker" is these days a pretty normal saying, and it implies that the borrow checker may be automatic, but 90% of its work is telling you: no, try again. That is hardly "without needing to do much extra at all".
That's what you're missing.
What's hilarious about "fighting the borrow checker" is that it's about the lexical lifetime borrow checking, which went away many years ago - fixing that is what "Non-lexical lifetimes" is about, which if you picked up Rust in the last like 4-5 years you won't even know was a thing. In that era you actually did need to "fight" to get obviously correct code to compile because the checking is only looking at the lexical structure.
Because this phrase existed, it became the thing people latch onto as a complaint, often even when there is no borrowck problem with what they were writing.
Yes, of course, when you make lifetime mistakes the borrowck means you have to fix them. It's true that, in a sense, in a GC language you don't have to fix them (although the consequences can be pretty nasty if you don't) because the GC will handle it -- and that in a language like Jai you can just endure the weird crashes (but remember, per this article, the weird crashes apparently aren't "Undefined Behaviour", even though that's exactly what they are).
As a Rust programmer I'm comfortable with the statement that it's "without needing to do much extra at all".
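For what it's worth, here's the classic sort of snippet the old lexical checker rejected but which compiles fine today under NLL (a toy example of mine, not anyone's real code):

```rust
fn main() {
    let mut v = vec![1, 2, 3];
    let first = &v[0];   // shared borrow of v
    println!("{first}"); // last use of that borrow
    // Under lexical borrow checking the borrow lasted to the end of the scope,
    // so this push was an error; with non-lexical lifetimes it compiles.
    v.push(4);
    println!("{v:?}");
}
```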
I appreciate what you're saying, but doesn't undefined behavior have to do with the semantics of execution as specified by the language? Most languages outright decline to specify multiple threads of execution, and instead provide it as a library. I think C started that trend. I'm not sure if Jai even has a spec, but the behavior you're describing could very well be "unspecified" not "undefined" and that's a distinction some folks care about.
This being said, yes Rust is useful to verify those scenarios because it _does_ specify them, and despite his brash takes on Rust, Jon admits its utility in this regard from time to time.
> the behavior you're describing could very well be "unspecified" not "undefined" and that's a distinction some folks care about.
Nah, it's going to be Undefined. What's going on here is that there's an optimising compiler, and the way compiler optimisation works is you Define some but not all behaviour in your language and the optimiser is allowed to make any transformations which keep the behaviour you Defined.
Jai uses LLVM so in many cases the UB is exactly the same as you'd see in Clang since that's also using LLVM. For example Jai can explicitly choose not to initialize a variable (unlike C++ 23 and earlier this isn't the default for the primitive types, but it is still possible) - in LLVM I believe this means the uninitialized variable is poison. Exactly the same awful surprises result.
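To make the class of problem concrete (this is Rust, not Jai, and purely my own sketch): deliberately treating uninitialized memory as initialized is exactly the kind of thing the optimizer is allowed to assume never happens.

```rust
use std::mem::MaybeUninit;

fn main() {
    let mut slot = MaybeUninit::<i32>::uninit();

    // Reading `slot` here (e.g. `unsafe { slot.assume_init() }` before any
    // write) would be undefined behaviour: LLVM treats the value as poison,
    // and the optimizer may rearrange surrounding code on the assumption
    // that it never happens.

    // Writing first and then asserting initialization is fine:
    slot.write(42);
    let value = unsafe { slot.assume_init() };
    println!("{value}");
}
```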
Your reasoning appears to be:
1. because it is the kind of optimizing compiler you say it is
2. because it uses LLVM
… there will be undefined behavior.
Unless you worked on Jai, you can’t support point 1. I’m not even sure if you’re right under that presumption, either.
> because it is the kind of optimizing compiler you say it is
What other kind of optimisations are you imagining? I'm not talking about a particular "kind" of optimisation but the entire category. Let's look at two real-world optimisations from opposite ends of the scale to see:
1. Peephole removal of null sequences. This is a very easy optimisation, if we're going to do X and then do opposite-of-X we can do neither and have the same outcome which is typically smaller and faster. For example on a simple stack machine pushing register R10 and then popping R10 achieves nothing, so we can remove both of these steps from the resulting program.
BUT if we've defined everything this can't work because it means we're no longer touching the stack here, so a language will often not define such things at all (e.g. not even mentioning the existence of a "stack") and thus permit this optimisation.
2. Idiom recognition of population count. The compiler can analyse some function you've written and conclude that it's actually trying to count all the set bits in a value, but many modern CPUs have a dedicated instruction for that, so, the compiler can simply emit that CPU instruction where you call your function.
BUT you wrote this whole complicated function; if we've defined everything, then all the fine details of your function must be reproduced: there must be a function call, maybe you make some temporary accumulator, you test and increment in a loop -- all defined, so such an optimisation would be impossible.
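To make (2) concrete, a sketch in Rust (since Jai isn't publicly available) of the kind of hand-rolled loop an optimizer may recognize as the popcount idiom and replace with a single instruction:

```rust
// A hand-written population count. With optimizations enabled, LLVM can often
// recognize this loop as the popcount idiom and emit a dedicated CPU
// instruction instead of reproducing every step of the loop -- something it
// could not do if all of those intermediate steps were defined as observable.
fn count_set_bits(mut x: u64) -> u32 {
    let mut count = 0;
    while x != 0 {
        count += (x & 1) as u32;
        x >>= 1;
    }
    count
}

fn main() {
    let v: u64 = 0b1011_0110;
    assert_eq!(count_set_bits(v), v.count_ones());
    println!("{} set bits", count_set_bits(v));
}
```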
>In that era you actually did need to "fight" to get obviously correct code to compile because the checking is only looking at the lexical structure.
NLL's final implementation (Polonius) hasn't landed yet, and many of the original cases that NLL were meant to allow still don't compile. This doesn't come up very often in practice, but it sure sounds like a hole in your argument.
What does come up in practice is partial borrowing errors. It's one of the most common complaints among Rust programmers, and it definitely qualifies as having to fight/refactor to get obviously correct code to compile.
> What does come up in practice is partial borrowing errors.
For some people. For example, I personally have never had a partial borrowing error.
> it definitely qualifies as having to fight/refactor to get obviously correct code to compile.
This is not for sure. That is, while it's code that could work, it's not obviously clear that it's correct. Rust cares a lot about the contract of function signatures, and partial borrows violate the signature, that's why they're not allowed. Some people want to relax that restriction. I personally think it's a bad idea.
> Rust cares a lot about the contract of function signatures, and partial borrows violate the signature
People want to be able to specify partial borrowing in the signatures. There have been several proposals for this. But so far nothing has made it into the language.
Just to give an example of where I've run into countless partial borrowing problems: writing a Vulkan program. The usual pattern in C++ etc. is to just have a giant "GraphicsState" struct that contains all the data you need. Then you just pass a reference to that to any function that needs any state. (Of course, this is not safe, because you could have accidental mutable aliasing.)
But in Rust, that just doesn't work. You get countless errors like "Can't call self.resize_framebuffer() because you've already borrowed self.grass_texture" (even though resize_framebuffer would never touch the grass texture), "Can't call self.upload_geometry() because you've already borrowed self.window.width", and so on.
So instead you end up with 30 functions that each take 20 parameters and return 5 values, and most of the code is shuffling around function arguments
It would be so much nicer if you could instead annotate that resize_framebuffer only borrows self.framebuffer, and no other part of self.
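A minimal sketch of the conflict (field and method names invented for illustration):

```rust
struct GraphicsState {
    framebuffer: Vec<u8>,
    grass_texture: Vec<u8>,
}

impl GraphicsState {
    // `&mut self` borrows *all* of GraphicsState, even though the body only
    // ever touches the framebuffer field.
    fn resize_framebuffer(&mut self) {
        self.framebuffer.clear();
    }
}

fn main() {
    let mut state = GraphicsState {
        framebuffer: vec![0; 16],
        grass_texture: vec![255; 16],
    };
    let texture = &state.grass_texture;
    println!("texture bytes: {}", texture.len());
    // If `texture` were used again after this call, the compiler would reject
    // the program with E0502 ("cannot borrow `state` as mutable because it is
    // also borrowed as immutable"), even though resize_framebuffer never
    // touches grass_texture.
    state.resize_framebuffer();
}
```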
> People want to be able to specify partial borrowing in the signatures.
That's correct. That's why I said "Some people want to relax that restriction. I personally think it's a bad idea."
> The usual pattern in C++ etc. is to just have a giant "GraphicsState" struct that contains all the data you need. Then you just pass a reference to that to any function that needs any state.
Yes, I think that this style of programming is not good, because it creates giant balls of aliasing state. I understand that if the library you use requires you to do this, you're sorta SOL, but in the programs I write, I've never been required to do this.
> So instead you end up with 30 functions that each take 20 parameters and return 5 values, and most of the code is shuffling around function arguments
Yes, this is the downstream effects of designing APIs this way. Breaking them up into smaller chunks of state makes it significantly more pleasant.
I am not sure that it's a good idea to change the language to make using poorly designed APIs easier. I also understand that reasonable people differ on this issue.
>Yes, this is the downstream effects of designing APIs this way. Breaking them up into smaller chunks of state makes it significantly more pleasant.
What they're describing is the downstream effect of not designing APIs that way. If you could have a single giant GraphicsState and define everything as a method on it, you would have to pass around barely any arguments at all: everything would be reachable from the &mut self reference. And either with some annotations or with just a tiny bit of non-local analysis, the compiler would still be able to ensure non-aliasing usage.
"functions that each take 20 parameters and return 5 values" is what you're forced to write in alternative to that, to avoid partial borrowing errors: for example, instead of a self.resize_framebuffer() method, a free function resize_framebuffer(&mut self.framebuffer, &mut self.size, &mut self.several_other_pieces_of_self, &mut self.borrowed_one_by_one).
I agree that the severity of this issue is highly dependent on what you're building, but sometimes you really do have a big ball of mutable state and there's not much you can do about it.
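A sketch of that workaround (again with invented names): disjoint field borrows are accepted when passed explicitly, which is exactly what pushes you toward the many-parameter style.

```rust
struct GraphicsState {
    framebuffer: Vec<u8>,
    size: (u32, u32),
    grass_texture: Vec<u8>,
}

// Borrow only the fields actually needed, one by one. This compiles even
// while other fields of GraphicsState are borrowed elsewhere, at the cost of
// ever-growing parameter lists.
fn resize_framebuffer(framebuffer: &mut Vec<u8>, size: &mut (u32, u32)) {
    size.0 /= 2;
    size.1 /= 2;
    framebuffer.resize((size.0 * size.1) as usize, 0);
}

fn main() {
    let mut state = GraphicsState {
        framebuffer: vec![0; 64],
        size: (8, 8),
        grass_texture: vec![255; 16],
    };
    let texture = &state.grass_texture; // disjoint field: no conflict
    resize_framebuffer(&mut state.framebuffer, &mut state.size);
    println!("{} {}", texture.len(), state.framebuffer.len());
}
```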
Maybe I'm spoiled because I work with Rust primarily these days but "fighting the borrow checker" isn't really common once you get it.
A lot has been written about this already, but again I think you're simplifying here by saying "once you get it". There's a bunch of options here for what's happening:
1. The borrow checker is indeed a free lunch.
2. Your domain lends itself well to Rust; other domains don't.
3. Your code is more complicated than it would be in other languages in order to please the borrow checker, but you are unaware of it because that's just the natural process of writing code in Rust.
There are probably more things that could be going on, but I think this is clear.
I certainly doubt it's #1, given the high volume of very intelligent people who have had negative experiences with the borrow checker.
"But after an initial learning hump, I don't fight the borrow checker anymore" is quite common and widely understood.
Just like any programming paradigm, it takes time to get used to, and that time varies between people. And just like any programming paradigm, some people end up not liking it.
That doesn't mean it's a "free lunch."
I'm not sure what you mean here, since in other replies to this same thread you've already encountered someone who is, by virtue of Rust's borrow checker design, forced to change their code in a way that is, to them, a net negative.
Again, this person has no trouble understanding the borrow checker; they have trouble with the outcome of satisfying it. Also, this person is writing Vulkan code, so intelligence is not the problem.
> is quite common and widely understood
This is an opinion expressed in a bubble, which does not in any way disprove that the reverse is also expressed in another bubble.
"common" does not mean "every single person feels that way" in the same sense that one person wanting to change their code in a way they don't like doesn't mean that every single person writing Rust feels the way that they do.
If your use case can be split into phases you can just allocate memory from an arena, copy out whatever needs to survive the phase at the end and free all the memory at once. That takes care of 90%+ of all allocations I ever need to do in my work.
For the rest you need more granular manual memory management, and defer is just a convenience in that case compared to C.
I can have graphs with pointers all over the place during the phase, I don't have to explain anything to a borrow checker, and it's safe as long as you are careful at the phase boundaries.
Note that I almost never have things that need to survive a phase boundary, so in practice the borrow checker is just a nuisance in my work.
There other use cases where this doesn't apply, so I'm not "anti borrow checker", but it's a tool, and I don't need it most of the time.
You can explain this sort of pattern to the borrow checker quite trivially: slap a single `'arena` lifetime on all the references that point to something in that arena. This pattern is used all over the place, including rustc itself.
(To be clear I agree that this is an easy pattern to write correctly without a borrow checker as well. It's just not a good example of something that's any harder to do in Rust, either.)
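A rough sketch of what that looks like, assuming the `typed_arena` crate (names are illustrative):

```rust
// Cargo.toml: typed-arena = "2"
use typed_arena::Arena;

// Every reference into the arena carries the same 'arena lifetime.
struct Node<'arena> {
    value: u32,
    edges: Vec<&'arena Node<'arena>>,
}

fn main() {
    let arena = Arena::new();

    // Build a tiny graph; the borrow checker only needs to know that all of
    // these references live as long as the arena itself.
    let a: &Node = arena.alloc(Node { value: 1, edges: Vec::new() });
    let b: &Node = arena.alloc(Node { value: 2, edges: vec![a] });

    for n in &b.edges {
        println!("{} -> {}", b.value, n.value);
    }
    // Everything is freed at once when `arena` goes out of scope.
}
```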
I remember having multiple issues doing this in rust, but can't recall the details. Are you sure I would just be able to have whatever refs I want and use them without the borrow checker complaining about things that are actually perfectly safe? I don't remember that being the case.
Edit: reading wavemode's comment above -- "Namely, in Rust it is undefined behavior for multiple mutable references to the same data to exist, ever. And it is also not enough for your program to not create multiple mut - the compiler also has to be able to prove that it can't." -- I think that was at least one of the problems I had.
The main issue with using arenas in Rust right now is that the standard library collections use the still-unstable allocator API, so you cannot use those with them. However, this is a systems language, so you can use whatever you want for your own data structures.
> reading wavemode's comment above
This is true for `&mut T` but that isn't directly related to arenas. Furthermore, you can have multiple mutable aliased references, but you need to not use `&mut T` while doing so: you can take advantage of some form of internal mutability and use `&T`, for example. What is needed depends on the circumstances.
wavemode's comment only applies to `&mut T`. You do not have to use `&mut T` to form the reference graph in your arena, which indeed would be unlikely to work out.
Not sure about the implicit behavior. In C++, you can write a lot of code using vector and map that would require manual memory management in C. It's as if the heap wasn't there.
Feels like there is a beneficial property in there.
> Either you sacrifice memory safety or you accept a more restrictive paradigm than GC'd languages give you.
This is true but there is a middle ground. You use a reasonably fast (i.e. compiled) GC lang, and write your own allocator(s) inside of it for performance-critical stuff.
Ironically, this is usually the right pattern even in non-GC langs: you typically want to minimize unnecessary allocations during runtime, and leverage techniques like object pooling to do that.
IOW I don't think raw performance is a good argument for not using GC (e.g. gamedev or scientific computing).
Not being able to afford the GC runtime overhead is a good argument (e.g. embedded programs, HFT).
It's difficult to design a language which has good usability both with and without a GC. Can users create a reference which points to the interior of an object? Does the standard library allocate? Can the language implement useful features like move semantics and destructors, when GCed objects have an indefinite lifetime?
You'd almost end up with two languages in one. It would be interesting to see a language fully embrace that, with fast/slow language dialects which have very good interoperability. The complexity cost would be high, but if the alternative is learning two languages rather than one...
I'm not saying you design a language with an optional GC, I'm saying the user can implement their own allocators i.e. large object pools nested in the GC-managed memory system. And then they get to avoid most of the allocation and deallocation overhead during runtime.
Sorry, I wasn't very clear - I think that using an object pool in a GCed language is like writing code in a dialect of that language which has no allocator.
Sure, but how is that any different than what you'd have to do in a regular GC-less lang to achieve good (allocation-avoiding) performance.
>as impossible as finding two even numbers whose sum is odd.
That is a great line worth remembering.
Stolen from one of the mathematician Underwood Dudley's essays on cranks :)
The biggest "crime" of Jai is that it (soft-)launched like an open source programming language and didn't actually become open source shortly. There are so many programming languages that did go through the "beta" period and still remain open sourced all the time. Open source doesn't imply open governance, and most such languages are still evolved almost solely with original authors' judgements. It is fine for Jai to remain closed of course, but there is no practical reason for Jai to remain closed to this day. The resulting confusion is large enough to dismiss Jai at this stage.
Same story with the Mojo language, unfortunately.
To me this raises the question of whether this is a growing trend, or whether it's simply that languages staying closed source tends to be a death sentence for them in the long term.
Yep. Same dilemma as Star Citizen. If both just threw their hands up and said, "Done!", today then everyone would agree that a great product had been released and everyone would be mostly pleased. Instead, development has dragged on so long as to cast doubts over the goodwill of the founders. Now, Jai is unusable because it's difficult to trust Blow if he's willing to lie about that and Star Citizen is unplayable because the game was clearly released under false pretenses.
> because most Javascript programmers are entirely unaware of the memory allocation cost associated with each call to anonymous functions
How does calling an anonymous function in JS cause memory allocations?
I also found this comment a bit strange. I'm not aware of a situation where this occurs, though he might be conflating creating an anonymous function with calling it.
> he might be conflating creating an anonymous function with calling it.
Yeah, that's what I figured. I don't know JS internals all too well, so I thought he might be hinting at some unexpected JS runtime quirk.
Probably misspoke, returning or passing anonymous functions cause allocations for the closures, then calling them causes probably 4 or 5 levels of pointer chasing to get the data that got (invisibly) closed over
I don't think there is much pointer chasing at runtime. With lexically scoped closures it's only the compiler that walks the stack frames to find the referenced variable; the compiled function can point directly to the object in the stack frame. In my understanding, closed-over variables have (almost) no runtime cost over "normal" local variables. Please correct me if I'm wrong.
I meant more like storing closures to be used later, after any locals are out of the stack frame, but to be honest that's an abstraction that also causes allocations in C++ and Rust. On the other hand, I have no idea how JS internals work, but I know that in Python getting the length of an array takes five layers of pointer indirection, so it very well could be pointer to closure object -> pointer to list of closed-over variables -> pointer to boxed variable -> pointer to number, or some ridiculous thing like that.
In C++, lambda functions don't require dynamic memory allocation; only type erasure via std::function does (if the capture list is too large for small-function optimization).
However, C++ lambdas don't keep the parent environment alive, so if you capture a local variable by reference and call the lambda outside the original function's scope, you have a dangling reference and get a crash.
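The Rust analogue of that distinction, as a rough sketch: a plain closure keeps its captures inline, and only boxing it behind `dyn Fn` (roughly the equivalent of std::function) heap-allocates the environment.

```rust
fn main() {
    let base = 10;

    // Capturing closure used directly: the captured environment lives inline,
    // with no heap allocation, much like an unboxed C++ lambda.
    let add = |x: i32| x + base;
    println!("{}", add(5));

    // Type-erased closure: Box<dyn Fn> moves the captured environment onto
    // the heap and calls it through a vtable, the analogue of std::function.
    let boxed: Box<dyn Fn(i32) -> i32> = Box::new(move |x| x + base);
    println!("{}", boxed(5));
}
```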
JavaScript is almost always JIT’ed and Python is usually not, so I wouldn’t rely on your Python intuition when talking about JavaScript performance. Especially when you’re using it to suggest that JavaScript programmers don’t understand the performance characteristics of their code.
I didn't know much about Jai and started reading about it. It really does have (according to the article) some exciting features, but this caught my eye:
"... Much like how object oriented programs carry around a this pointer all over the place when working with objects, in Jai, each thread carries around a context stack, which keeps track of some cross-functional stuff, like which is the default memory allocator to ..."
It reminds me of Go's context, and something like it should exist in any language dealing with multi-threading, as a way of carrying info about the parent thread/process (and tokens) for trace propagation, etc.
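Not Jai code, but a loose Rust sketch of the general idea (a made-up `Context` type and a thread-local stack), just to show the shape of it:

```rust
use std::cell::RefCell;

// Hypothetical per-thread context. In Jai or Odin this would carry things
// like the default allocator and logger; here it is just a named setting.
struct Context {
    allocator_name: &'static str,
}

thread_local! {
    // Each thread gets its own context stack.
    static CONTEXT: RefCell<Vec<Context>> =
        RefCell::new(vec![Context { allocator_name: "default_heap" }]);
}

fn current_allocator() -> &'static str {
    CONTEXT.with(|c| c.borrow().last().unwrap().allocator_name)
}

// Run `f` with a temporarily pushed context, popping it afterwards.
fn with_context<R>(ctx: Context, f: impl FnOnce() -> R) -> R {
    CONTEXT.with(|c| c.borrow_mut().push(ctx));
    let result = f();
    CONTEXT.with(|c| c.borrow_mut().pop());
    result
}

fn main() {
    println!("{}", current_allocator()); // default_heap
    with_context(Context { allocator_name: "temp_arena" }, || {
        println!("{}", current_allocator()); // temp_arena
    });
    println!("{}", current_allocator()); // default_heap
}
```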
The Odin programming language uses an implicit context pointer like Jai, and is freely available and open source.
Thanks! I should check it out!
I sincerely dread that by the time jblow releases jai, people will just have moved on to zig or rust, and that it will just become irrelevant.
I'm sure jblow is having the same fears, and I hope to be wrong.
Still, it's fun to be remembering the first few videos about "hey, I have those ideas for a language". Great that he could afford to work on it.
Sometimes, mandalas are what we need.
I think the Zig community should worry more about when Jai is released. Especially given that Zig isn't anywhere near 1.0 yet, and feature-wise it's a rather sparse language that has a first-mover advantage over alternatives, but as a language it doesn't actually bring many new tools AND is bad with IDEs.
Jai similarly is hard for IDEs, but has much more depth and power.
While Zig has momentum, it will need to solidify it to become mainstream, or Jai has a good chance of disrupting Zig’s popularity. Basically Zig is Jai but minus A LOT of features, while being more verbose and annoyingly strict about things.
Odin, on the other hand, has no compile-time code execution and in general has different solutions compared to Zig and Jai, with its rich set of built-in types and IDE friendliness.
And finally C3 which is for people who want the familiarity of C with improvement but still IDE friendliness with limited metaprogramming. This language is also less of an overlap with Jai than Zig is.
Amazingly, in agreement with most of this. The sooner that Jai is released, the more likely it will be a Zig popularity disruptor. And Zig is definitely vulnerable. It has a large amount of issues, is dealing with trying to maintain being a limited/simple language, and still far from 1.0.
Regardless of comptime, Odin's and C3's public accessibility, and their being close enough to Jai for folks to contemplate switching over, will eat at its public mind share. In both cases (be it Zig or Odin/C3), the longer Jai keeps making the mistake of avoiding a full public release, the more it appears to be hurting itself. In fact, many would argue that a bit of Jai's "shine" has already worn off. There are now many alternative languages out there that have already been heavily influenced by it.
Why do you dread this? It’s entirely possible that Jai is not good enough to compete, no?
That would be the "best case" scenario (an inferior language beaten by better ones.)
But, no, the hubris of the language creator, whose arrogance is probably close to a few nano-Dijkstras, makes it entirely possible that he prefers _not_ releasing a superior language, out of spite for the untermenschen that would "desecrate" it by writing web servers inside it.
So I'm pretty convinced now that he will just never release it except to a happy few, and then he will die of cardiovascular disease because he spent too much time sitting in a chair streaming nonsense, and the world will have missed an opportunity.
Then again, I'm just sad.
As Jon Stewart said: "on the bright side, I'm told that at some point the sun will go supernova, and we'll all die."
> Software has been getting slower at a rate roughly equivalent to the rate at which computers are getting faster.
Cite?
This problem statement is also such a weird introduction to specifically this new programming language. Yes, compiled languages with no GC are faster than the alternatives. But the problem is and was not the alternatives. Those alternatives fill the vast majority of computing uses and work well enough.
The problem is compiled languages with no GC, before Rust, were bug prone, and difficult to use safely.
So -- why are we talking about this? Because jblow won't stop catastrophizing. He has led a generation of impressionable programmers to believe that we are in some dark age of software, when that statement couldn't be further from the truth.
I carefully watched a number of the early Jai language YouTube videos. Some of his opinions on non-programming topics are just goofy: I recall him ranting (and I wish I could find it again) about the supposed pointlessness of logarithmic scales (decibels, etc.) vs scientific notation, and I experienced a pretty bad cringe spasm.
jblow's words are not the Gospel on high.
> He has led a generation of impressionable programmers to believe that we are in some dark age of software, when that statement couldn't be further from the truth.
Have you actually used modern software?
There's a great rant about the Visual Studio debugger, which in recent versions cannot even update debugged values as you step through the program, unlike its predecessors: https://youtu.be/GC-0tCy4P1U?si=t6BsHkHhoRF46mYM
And this is professional software. The state of personal software is worse. Most programs cannot show a page of text with a few images without consuming gigabytes of RAM and not-insignificant percentages of CPU.
> Have you actually used modern software?
Uh, yes. When was software better (like when was America great)? Do you remember what Windows and Linux and MacOS were like in 90s? What exactly is the software we are comparing?
> There's a great rant about Visual Studio debugger
Yeah, I'm not sure these are "great rants" as you say. Most are about how software with different constraints than video games isn't made with the same constraints as video games. Can you believe it?
I am told that in Visual Studio 2008, you could debug line by line, and it was smooth. Like there was zero lag. Then Microsoft rewrote VS from C++ in C# and it became much slower.
Modern software is indeed slow especially when you consider how fast modern hardware is.
If you want to feel the difference, try highly optimised software against popular software. For example: Linux vs Windows, Windows Explorer vs File Pilot, Zed vs VS Code.
> I am told that in Visual Studio 2008, you could debug line by line, and it was smooth. Like there was zero lag. Then Microsoft rewrote VS from C++ in C# and it became much slower.
Not exactly a surprise? Microsoft made a choice to move to C# and the code was slower? Says precious little about software in general and much more about the constraints of modern development.
> If you want to feel the difference, try highly optimised software against popular software. For example: Linux vs Windows, Windows Explorer vs File Pilot, Zed vs VS Code.
This reasoning is bonkers. Compare vastly different software with a vastly different design center to something only in the same vague class of systems?
If the question is "Is software getting worse or better?", doesn't it make more sense to compare newer software to the same old software? Again -- do you remember what Windows and Linux and MacOS were like in 90s? Do you not believe they have improved?
I have used Windows for 20 years. I distinctly recall it becoming slower and more painful over time, despite running on more powerful hardware.
But hey, that could be nostalgia, right? We can't run Windows XP in today's world, nor is it recommended, with lots of software no longer supported on XP.
The same is the case for Android. Android 4 had decent performance; then Android 5 came along and single-handedly reduced performance and battery life. And again, you can't go back, because newer apps no longer support old Android versions.
This is also seen with Apple, where newer OS versions are painful on older devices.
So, on what basis does one fairly say that "modern apps are slow"? That's why I say to use faster software as a reference. I have Linux and Windows dual-booting on the same machine, and the difference in performance is night and day.
> So, on what basis does one fairly say that "modern apps are slow"? That's why I say to use faster software as a reference. I have Linux and Windows dual-booting on the same machine, and the difference in performance is night and day.
Then you're not comparing old and new software. You're comparing apples and oranges. Neovim is comparable to VS Code in only the most superficial terms.
> Neovim is comparable to VS Code in only the most superficial terms.
Oh no. It can be compared in more than superficial terms. E.g. their team struggled to create a performant terminal in VS Code. Because the tech they chose (and the tech a lot of the world is using) is incapable of outputting text to the screen fast enough. Where "fast enough" is "with minimal acceptable speed which is still hundreds of times slower than a modern machine is capable of": https://code.visualstudio.com/blogs/2017/10/03/terminal-rend...
> E.g. their team struggled to create a performant terminal in VS Code.
WTF are you talking about? Neovim doesn't implement a terminal?
"our computers are thousands of times faster and more powerful than computers from the 90s and early 2000s, so of course it makes sense that 'constraints of development' make it impossible to make a working debugger on a modern supercomputer due to ... reasons. Doesn't mean this applies to all software ... which is written by same developers in same conditions on same machines in same languages for same OSes"
> so of course it makes sense that 'constraints of development' make it impossible to make a working debugger
All of these examples are of Microsoft not building X as well as it used to, which is entirely possible. However, Microsoft choosing to move languages says something entirely different to me than simply "software somehow got worse." It says to me that devs weren't using C++ effectively. It says to me that a tradeoff was made re: raw performance for more flexibility and features. No one sets out to make slow software. Microsoft made a choice. At least think about why that might be.
> It says to me that a tradeoff was made re: raw performance for more flexibility and features.
It says that "our computers are thousands of times faster and more powerful than computers from the 90s and early 2000s" and yet somehow "flexibility and features" destroy all of those advancements.
And no, it's not just Microsoft.
> Do you remember what Windows and Linux and MacOS were like in 90s? What exactly is the software we are comparing?
Yes, yes I do.
Since then the computer have become several orders of magnitude more powerful. You cannot even begin to imagine how fast and powerful our machines are.
And yet nearly everything is barely capable of minimally functioning. Everything is riddled with loading screens, lost inputs, freeze frames and janky scrolling etc. etc. Even OS-level and professional software.
I now have a AMD Ryzen 9 9950X3D CPU, GeForce RTX 5090 GPU, DDR5 6000MHz RAM and M.2 NVME disks. I should not even see any loading screen, or any operation taking longer than a second. And yet even Explorer manages to spend seconds before showing contents of some directories.
Hard to see how this will compete with Zig.
Zig comes with a mountain of friction and ceremony, imo. It does a lot of things well, but for game dev I'd take Jai or Odin every time.
I can see the appeal if there is a need for stronger metaprogramming. Not that Zig is terrible in this area, it is just that Jon's language is much more powerful in that area at this stage.
That being said, I do see an issue with globally scoped imports. It would be nice to know if imports can be locally scoped into a namespace or struct.
In all, whether it competes or coexists (I don't believe the compiler for Jon's language can handle other languages, so you might use Zig to compile any C, C++, or Zig), it will be nice to see another programming language garner some attention and hopefully quell the hype around others.
Well, they promise to release a full commercial game engine alongside it, so that might help :)
Hard to see this compare at all without examples, but I'll stay patient
I would have liked to see more code examples in the article.
Yeah, I see a bunch of praise and references to unfamiliar features/syntax and it makes it hard to follow and to form an opinion about.
Honestly the tone of the article was so smug and condescending that I couldn’t finish it.
Wow... I did not get that _at all_. Opinionated, maybe -- do I have to share all these opinions to the degree to which they've been expressed? No. But condescending? To whom? To duck-typed languages?
It's condescending to the people who've noticed they make mistakes and so value a language which is designed accordingly:
"So, put simply, yes, you can shoot yourself in the foot, and the caliber is enormous. But you’re being treated like an adult the whole time"
That is, those of us who've noticed we make mistakes aren't adults, we're children, and this is a proper grown-up language -- pretty much the definition of condescending.
I can't tell if you're joking or not, but if you aren't, no one is calling you a child. The article is obviously saying that the compiler doesn't stop you from doing dumb things, which is a privilege generally only extended to adults. Nobody is saying anyone who makes mistakes is a child.
If you feel this article is smug and condescending, don't start watching the language designer's stream too soon.
The least you can say is that he is _opinionated_. Even his friend Casey Muratori is "friendly" in comparison, at least trying to publish courses to elevate us masses of unworthy typescript coders to the higher planes of programming.
Jblow just wants you to feel dumb for not programming right. He's unforgiving, Socrates-style.
The worst thing is : he might be right, most of the time.
We wouldn't know, because we find him infuriating, and, to be honest, we're just too dumb.
In my experience programming is more about formalising the domain of the problem than it is about shuffling bits around. Take a minute more than needed and you'll lose hundreds. Get the answer wrong? Lose millions. Domains where you deprioritise correctness for speed just... don't seem that interesting to me. No need to look down on memory-managed languages. Personally, Haskell and APL impress me more, but I won't shit on the author for being stuck in an imperative paradigm.
I wonder when the Jai open beta will happen.
If all goes according to plan, late this year. https://youtu.be/jamU6SQBtxk?si=hDbwZQX2MtFiwun8
Odin has replaced the need for Jai, and even then, I'm not sure there is a need for "yet another sugar for LLVM".
They show an adoration for C, and they both hate C++, yet they chose C++ for their compilers. Go figure.
I am not interested. I am just trying to code with C3 and make some bindings to other languages like C and Zig; it is quite easy and fun. I think that's enough language learning for me, compared to using Jai, which has never released its compiler to the public to this day.