DanRosenwasser 4 days ago

Hi folks, Daniel Rosenwasser from the TypeScript team here. We're obviously very excited to announce this! RyanCavanaugh (our dev lead) and I are around to answer any quick questions you might have. You can also tune in to the Discord AMA mentioned in the blog this upcoming Thursday.

  • spankalee 4 days ago

    Hey Daniel.

    I write a lot of tools that depend on the TypeScript compiler API, and they run in a lot of JS environments, including Node and the browser. The current CJS codebase is even a little tricky to load into environments that support standard JS modules, like browsers, so I've been _really_ looking forward to what Jake and others have said will be an upcoming standard-modules-based version.

    Is that still happening, and how will the native compiler be distributed for us tools authors? I presume WASM? Will the compiler API be compatible? Transforms, the AST, LanguageService, Program, SourceFile, Checker, etc.?

    I'm quite concerned that the migration path for tools could be extremely difficult.

    [edit] To add to this as I think about it: I maintain libraries that build on top of the TS API, and are then in turn used by other libraries that still access the TS APIs. Things like framework static analysis, then used by various linters, compilers, etc. Some linters are integrated with eslint via typescript-eslint. So the dependency chain is somewhat deep and wide.

    Is the path forward going to be that just the TS compiler has a JS interop layer and the rest stays the same, or are all TS ecosystem tools going to have to port to Go to run well?

    • ciarcode 4 days ago

      I think they answered in their FAQ here: https://github.com/microsoft/typescript-go/discussions/455#d....

      If I understood it correctly, they created a native Node module that allows synchronous communication over standard I/O between external processes.

      So this Node module will make communication possible between the TypeScript compiler Go process, which will expose a compiler “API server”, and a client-side JavaScript process.

      They don’t think it will be possible to port all APIs, and some/most of them will be different from today’s.

      • spankalee 3 days ago

        I really wonder how tough that is going to be to migrate to.

        I have API use that falls into a few categories that aren't just LSP-ish type cases:

        - Transforms, which presumably there has to be some solution for, even if it's porting to Go.

        - Linters, which integrate with typescript-eslint and need the type-checker.

        - Codemods, which create and modify AST nodes and re-emit them.

        - Static analyzers, which build us app-specific models of the code and rely on AST traversal and the type-checker.

        - Analyzer libraries that offer tooling to other libraries and apps, exposing the TypeScript AST and functions that operate on AST nodes.

        Traversing the AST over IPC is going to be too chatty, so I presume there will have to be some sort of way to get a whole SourceFile in one call, but then I wonder about traversal. You'll need a visitor library on your side of the IPC at least, but that's simple. But then you also need all the predicates. You don't want to be calling ts.isTemplateExpression() on every node via IPC.
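
        A hypothetical sketch (my guess, not the announced API) of the batch-style shape I'd hope for on the Go side, where one call returns a whole serialized SourceFile so traversal and the kind predicates run locally in JS:

            package ipc

            import "encoding/json"

            // Hypothetical wire shapes, not the actual typescript-go API: one request
            // returns a whole serialized SourceFile so the JS side can walk it locally
            // and run kind predicates (isTemplateExpression, etc.) against Kind without
            // another IPC round trip per node.
            type NodeDTO struct {
                Kind     int       `json:"kind"`
                Pos      int       `json:"pos"`
                End      int       `json:"end"`
                Children []NodeDTO `json:"children,omitempty"`
            }

            type SourceFileDTO struct {
                FileName string  `json:"fileName"`
                Root     NodeDTO `json:"root"`
            }

            // GetSourceFile would be a single batch call answered over stdio.
            func GetSourceFile(file SourceFileDTO) ([]byte, error) {
                return json.Marshal(file)
            }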

        And I do all this stuff in web workers too, so whatever this IPC is has to work there.

        • ciarcode 3 days ago

          In the post, they specifically talk about two points that seem to address some of your doubts:

          1. “We expect to have a more curated API that is informed by critical use-cases (e.g. linting, transforms, resolution behavior, language service embedding, etc.).”

          2. “We also can imagine opportunities to optimize, use other underlying IPC strategies, and provide batch-style APIs to minimize call overhead.”

          Anyway, I’ve used the compiler API a lot too, and I really enjoy its huge capabilities, which make practically everything possible on the source code (EDIT: and let you hijack the build process too). I hope we won’t miss too much.

    • lytedev 4 days ago

      Reading the article, it looks like they are writing Go, so they will probably be distributing Go binaries.

      • maxloh 4 days ago

        Maybe it will also be distributed as WASM, which is easier to integrate with JavaScript codebases.

        • psd1 3 days ago

          Do a n00b a favour... would you ever run wasm outside of a client browser? Are you suggesting that wasm is a viable platform for local services or commands?

          Or do you mean that there's a use case for a compilation in the browser?

        • nine_k 4 days ago

          Would running WASM be any faster than running JS in V8?

          • airforce1 4 days ago

            In my experience it is pretty difficult to make WASM faster than JS unless your JS is really crappy and inefficient to begin with. LLVM-generated WASM is your best bet to surpass vanilla JS, but even then it's not a guarantee, especially when you add js interop overhead in. It sort of depends on the specific thing you are doing.

            I've found that as of 2025, Go's WASM generator isn't as good as LLVM's, and it has been very difficult for me to even get parity with vanilla JS performance. There is supposedly a way to use a subset of Go with LLVM for faster WASM, but I haven't tried it (https://tinygo.org/).
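
            For context, this is the basic shape of going from Go to a JS-callable WASM export with the standard toolchain (a minimal sketch; the exported add function is just an example):

                //go:build js && wasm

                // Build with: GOOS=js GOARCH=wasm go build -o main.wasm
                // Or, assuming the code stays within TinyGo's supported subset:
                //   tinygo build -o main.wasm -target wasm
                package main

                import "syscall/js"

                func main() {
                    // Every call from JS into this function crosses the JS/WASM boundary,
                    // which is where the interop overhead piles up.
                    js.Global().Set("add", js.FuncOf(func(this js.Value, args []js.Value) any {
                        return args[0].Int() + args[1].Int()
                    }))
                    select {} // block so the exported function stays callable from JS
                }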

            I'm hoping that Microsoft might eventually use some of their wasm chops to improve Go's native wasm compiler. Their .NET wasm compiler is pretty darn good, especially if you enable AOT.

            • zozbot234 4 days ago

              I think the Wasm backends for both Golang and LLVM have yet to support the Wasm GC extension, which would likely be needed for anything like real parity with JS. The present approach is effectively including a full GC implementation alongside your actual Golang code and running that within the Wasm linear memory array, which is not a very sensible approach.

              • mappu 4 days ago

                The major roadblocks for WasmGC in Golang at the moment are (A) Go expects a non-moving GC which WasmGC is not obligated to provide; and (B) WasmGC does not support interior pointers, which Go requires.

                https://github.com/golang/go/issues/63904#issuecomment-22536...

                • zozbot234 4 days ago

                  These are no different than the issues you'd have in any language that compiles to WasmGC, because the new GC'd types are (AIUI) completely unrelated to the linear "heap" of ordinary WASM - they are pointed to via separate "reference" types that are not 'pointers' as normally understood. That whole part of the backend has to be reworked anyway, no matter what your source language is.

                  • mappu 4 days ago

                    Go exposes raw pointers to the programmer, so from your description I think those semantics are too rudimentary to implement Go's; there would need to be a WasmGC 2.0 to make this work.

                    It sounds like it would be a great fit for e.g. Lua though.

                    • zozbot234 4 days ago

                      I don't think Go supports any pointer arithmetic out-of-the-box? What it has in the base language is effectively references.

                      • yencabulator 2 days ago

                        You can get a pointer inside a struct ("interior pointers") without pointer arithmetic.
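
                        For example (a minimal sketch), this is an interior pointer in completely safe Go:

                            package ast

                            type Node struct {
                                Kind int
                                Pos  int
                            }

                            // interiorPointer returns &n.Pos, which points into the middle of the
                            // Node allocation; the GC must keep the whole object alive (and could
                            // not move it without updating this pointer). No unsafe, no pointer
                            // arithmetic.
                            func interiorPointer(n *Node) *int {
                                return &n.Pos
                            }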

                      • pjmlp 3 days ago

                        It does, via the unsafe package. Yes, it does look ugly; that is on purpose.

                            // Read the int at address start + i*size, bypassing the type system.
                            item := *(*int)(unsafe.Pointer(uintptr(start) + size*uintptr(i)))
                        
                        A random example taken from the Internet.

                        • zozbot234 3 days ago

                          That's not the base language, it's an unsafe superset. There's no reason why a Wasm-GC backend for Golang should be expected to support that by default.

                          • pjmlp 3 days ago

                            If it is part of the language reference, it is part of the language.

                            Back when language reference books were printed, or for ISO languages, what is on paper is the language.

                            We are only arguing semantics: whether something is a hardcoded primitive or made available via the standard library, especially in the case of blessed packages like unsafe, which aren't fully implemented as ordinary code but are rather magical types known to the compiler.

                            Hence the only thing you will see here is mostly documentation: https://github.com/golang/go/blob/master/src/unsafe/unsafe.g...

                            This is nothing new; since the 1960s there have been systems languages with some way to mark code as unsafe. The C lineage of languages is the one that decided to ignore this approach.

                        • nasretdinov 3 days ago

                          The standard library uses unsafe for syscalls, for higher-performance primitives like strings.Builder, etc., so its support is mandatory to run any non-trivial Go program.

                          • mappu 3 days ago

                            For a while the GOOS=nacl port and the Google App Engine ports of Go disallowed unsafe pointer manipulation too, so there is some precedent. Throughout some of the ecosystem you can see pieces of "nounsafe" build tag support (e.g. in easyjson).

                            • pjmlp 2 days ago

                              Most programming languages that offer unsafe, either as a language keyword or a meta package (unsafe/SYSTEM/UNSAFE, whatever the name), have a similar option; that doesn't make it less of a feature.

                          • zozbot234 3 days ago

                            Somehow I don't think Wasm-GC is going to support bare metal syscalls anytime soon. That stuff all has to be rewritten anyway if you want to target WASM.

                      • mappu 3 days ago

                        It also has an address-of operator; you can take the address of the middle of a large array.

                        I suppose that would be possible with fat-pointers that are reference+offset.

              • nicoburns 4 days ago

                > the Wasm GC extension, which would likely be needed for anything like real parity with JS

                Well, for languages that use a GC. People who are writing WASM that exceeds JS in speed are typically doing it in Rust or C++.

              • maxloh 4 days ago

                Yeah. If I remember it correctly, you need to compile the GC to run on WASM if the GC extension is not supported.

                • zozbot234 4 days ago

                  The GC extension is supported within browsers and other WASM runtimes these days - it's effectively part of the standard. Compiler developers are dropping the ball.

                  • AndrewDucker 3 days ago

                    The Wasm GC currently doesn't support the functionality needed by both Go and C#. (Interior pointers, for instance)

                    I'm hoping that a later version makes this possible.

            • DanielHB 4 days ago

              I did some perf benchmarks a few years ago on some JS code vs C code compiled to WASM using clang and running on V8 vs the same C code compiled to x64 using clang.

              The few cases that performed significantly better than the JS version (like >2x speed) were integer-heavy math and tail-call-optimized recursive code; some cases were slower than the JS version.

              What surprised me was that the JS version had similar performance to the x64 version with -O3 in some of my benchmarks (like float64 performance).

              This was a while ago though when WASM support had just landed in browsers, so probably things got better now.

            • pjmlp 4 days ago

              Apparently not good enough, given the decision to use Go.

          • kevingadd 4 days ago

            Interop with a WASM-compiled Go binary from JS will be slower but the WASM binary itself might be a lot faster than a JS implementation, if that makes sense. So it depends on how chatty your interop is. The main place you get bogged down is typically exchanging strings across the boundary between WASM and JS. Exchanging buffers (file data, etc) can also be a source of slowdown.

          • maxloh 4 days ago

            Very likely. Migrating compute-intensive tasks from JavaScript was one of the explicit goals behind the invention of WASM.

  • no_wizard 4 days ago

    Like others I'm curious about the choice of technology here. I see you went with Go, which is great! I know Go is fast! But it's also a more 'primitive' language (for lack of a better way of putting it) with no frills.

    Why not something like Rust? Most of the JS ecosystem that is moving toward faster tools seems to be going straight to Rust (Rolldown, rspack (the webpack successor), SWC, OXC, Lightning CSS / Parcel, etc.), and one of the reasons given is that it has really great language constructs for parsers and traversing ASTs (I think largely due to the existence of `match`, but I'm not entirely sure).

    Was any thought given to this? And if so, what were the deciding factors for Go vs. something like Rust or another language entirely?

    • prisenco 4 days ago

      | with no frills.

      People say this like it's a bad thing. It's not, it's Go's primary strength.

      • whattidywhat 3 days ago

        I can see the appeal. Not having to write C#-style OOP probably gave the team a huge productivity boost. I bet it compiles hundreds of times faster, making the team, CI/CD, and dev efforts substantially more productive. Cohesive, integrated, modern tooling is also a huge plus. Project structure is considerably simpler... I am not really a Go fan, but I would choose it over C# in a majority of cases as well.

        I think they missed out by not going with Rust. It seems like the social factors won out. It's probably hard to quickly assemble a Rust team within MSFT. Again, though, that makes Go a practical choice. I don't see why people are so confused by it. Go is a pretty widely used and solid choice to get things done reliably and quickly these days.

        • commandersaki 3 days ago

          The reason they didn't do Rust is that it was faster and more reliable to port the compiler this way, and Go was a strong match, particularly because of struct layout, types, concurrency, etc., but most importantly because it is native code with automatic garbage collection, which Rust simply doesn't have. There's a video of Anders talking specifically about this.

          • whattidywhat 3 days ago

            The automatic GC doesn't seem like an actual deal breaker though. They probably just didn't want to redesign a bunch of data types that assumed one existed.

            I'm not arguing that they made a bad call. I think what they did was smart given the options in front of them and whatever budget they have. The world isn't kind to idealism, but ideally it could have been written in Rust, in my opinion.

            • commandersaki 3 days ago

              > it ideally could have been written in rust in my opinion

              What exactly would that buy and would the outcome matter much?

              In my opinion: pragmatism > idealism.

              • Imustaskforhelp 3 days ago

                Exactly, it sounds as if he feels entitled to Rust software.

                Golang is generally very fast and simple as well; the only problem is memory allocation / garbage collector overhead, but the benefits outweigh the loss.

                • whattidywhat 3 days ago

                  I do like rust and like I said, ideally in my opinion. I also did say choosing go was more pragmatic. We are on the same page believe it or not.

            • zveyaeyv3sfye 3 days ago

              > They probably just didn't want to redesign a bunch of data types that assumed one existed.

              Dude, just read the article being discussed, this is addressed so you can just stop making shit up.

              The audacity of people like you to just keep adding speculations upon speculations on a subject without even bothering to learn what is being discussed.

              • whattidywhat 3 days ago

                Yes they said there were issues with cyclic data structures. Not really speculating just wrote it funny. I get why they chose what they did and even said I agree with it.

        • zveyaeyv3sfye 3 days ago

          > I think they missed out by not going with Rust. It seems like the social factors weighed out.

          They absolutely address this in the linked article, so why are we even speculating here?

          > Probably hard to quickly assemble a rust team within msft.

          The same MSFT that is rewriting their Windows OS in rust as we speak? I think you should stop commenting when you don't know anything about the subject.

          • oldmanhorton 3 days ago

            Saying that Microsoft is "Rewriting Windows in rust" suggests you might not be as informed as you think... Very specific components with history of performance or security issues are getting ported in a very uncoordinated effort. Windows will be primarily C, C++, and C# for a very long time to come

            • whattidywhat 3 days ago

              Also those are two different skill sets. Writing critical sections of an OS is not the same thing as writing a compiler. And completely agree, windows to what I have read, is being deliberate and isn't doing a total rewrite at all. Thanks for chiming in so I could type less.

        • za3faran 3 days ago

          > Not having to write C# style oop probably gave the team a huge productivity boost.

          I wrote a lot of Go code as well as Java. When people say things like this, I'm not quite sure what exactly they are referring to. No one is forcing you to write multi-level-deep inheritance hierarchies in Java/C#, and Go itself is OOP. Structural typing has its issues as well. Where does this supposed inherent productivity boost lie?

        • Imustaskforhelp 3 days ago

          Dude, not all things have to be written in Rust just for the sake of it.

          Rust is really hard compared to Golang. Using Go can increase outside contributions as well.

          Golang is love, Golang is life.

      • j-krieger 4 days ago

        Yes. For web servers. Not for compilers. I wrote a bunch of compilers, and Go is not a language I would choose for this.

        • bilekas 4 days ago

          Go is exceptionally fast for a transpiler; esbuild is a great example. Rust wouldn't offer significant gains versus the adoption and support tradeoff.

        • 9rx 3 days ago

          Unless, of course, you are not working on a greenfield project and are instead porting an existing compiler from TypeScript, in which case Go is the language you would choose, as it is the language that most closely resembles TypeScript, allowing ease of bulk conversion by script. That is the same reason it was chosen for TypeScript.

          • j-krieger 2 days ago

            I read that response and I agree entirely! Porting JS to Rust isn’t feasible. I love Rust, but it’s good to know your constraints.

    • DanRosenwasser 4 days ago

      We did anticipate this question, and we have actually written up an FAQ entry on our GitHub Discussions. I'll post the response below. https://github.com/microsoft/typescript-go/discussions/411.

      ____

      Language choice is always a hot topic! We extensively evaluated many language options, both recently and in prior investigations. We also considered hybrid approaches where certain components could be written in a native language, while keeping core typechecking algorithms in JavaScript. We wrote multiple prototypes experimenting with different data representations in different languages, and did deep investigations into the approaches used by existing native TypeScript parsers like swc, oxc, and esbuild. To be clear, many languages would be suitable in a ground-up rewrite situation. Go did the best when considering multiple criteria that are particular to this situation, and it's worth explaining a few of them.

      By far the most important aspect is that we need to keep the new codebase as compatible as possible, both in terms of semantics and in terms of code structure. We expect to maintain both codebases for quite some time going forward. Languages that allow for a structurally similar codebase offer a significant boon for anyone making code changes because we can easily port changes between the two codebases. In contrast, languages that require fundamental rethinking of memory management, mutation, data structuring, polymorphism, laziness, etc., might be a better fit for a ground-up rewrite, but we're undertaking this more as a port that maintains the existing behavior and critical optimizations we've built into the language. Idiomatic Go strongly resembles the existing coding patterns of the TypeScript codebase, which makes this porting effort much more tractable.
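
      As a purely illustrative sketch of that structural similarity (hypothetical helper, not lifted from either codebase), a TypeScript-style utility ports almost mechanically:

          package scanner

          // The TypeScript original would look something like:
          //
          //     function isKeyword(token: SyntaxKind): boolean {
          //         return token >= SyntaxKind.FirstKeyword && token <= SyntaxKind.LastKeyword;
          //     }
          //
          // and the Go version keeps the same shape and control flow.
          type SyntaxKind int

          const (
              FirstKeyword SyntaxKind = iota
              LastKeyword
          )

          func isKeyword(token SyntaxKind) bool {
              return token >= FirstKeyword && token <= LastKeyword
          }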

      Go also offers excellent control of memory layout and allocation (both on an object and field level) without requiring that the entire codebase continually concern itself with memory management. While this implies a garbage collector, the downsides of a GC aren't particularly salient in our codebase. We don't have any strong latency constraints that would suffer from GC pauses/slowdowns. Batch compilations can effectively forego garbage collection entirely, since the process terminates at the end. In non-batch scenarios, most of our up-front allocations (ASTs, etc.) live for the entire life of the program, and we have strong domain information about when "logical" times to run the GC will be. Go's model therefore nets us a very big win in reducing codebase complexity, while paying very little actual runtime cost for garbage collection.
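
      A rough illustrative sketch of what that can look like in practice (an assumption about usage, not a claim about the actual code):

          package main

          import "runtime/debug"

          func main() {
              // A batch compile can switch the collector off up front (equivalent to
              // running with GOGC=off); the OS reclaims everything when the process
              // exits at the end of the build.
              debug.SetGCPercent(-1)

              // ...parse, bind, check, emit...
          }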

      We also have an unusually large amount of graph processing, specifically traversing trees in both upward and downward walks involving polymorphic nodes. Go does an excellent job of making this ergonomic, especially in the context of needing to resemble the JavaScript version of the code.
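
      For example, an upward walk over polymorphic nodes can stay very close to how the JavaScript version reads (hypothetical node types, minimal sketch):

          package ast

          // Hypothetical node types for illustration.
          type Node interface{ Parent() Node }

          type FunctionDeclaration struct{ parent Node }
          type Identifier struct{ parent Node }

          func (f *FunctionDeclaration) Parent() Node { return f.parent }
          func (i *Identifier) Parent() Node          { return i.parent }

          // enclosingFunction walks parent links upward until it reaches a function declaration.
          func enclosingFunction(n Node) *FunctionDeclaration {
              for n != nil {
                  if f, ok := n.(*FunctionDeclaration); ok {
                      return f
                  }
                  n = n.Parent()
              }
              return nil
          }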

      Acknowledging some weak spots, Go's in-proc JS interop story is not as good as some of its alternatives. We have upcoming plans to mitigate this, and are committed to offering a performant and ergonomic JS API. We've been constrained in certain possible optimizations due to the current API model where consumers can access (or worse, modify) practically anything, and want to ensure that the new codebase keeps the door open for more freedom to change internal representations without having to worry about breaking all API users. Moving to a more intentional API design that also takes interop into account will let us move the ecosystem forward while still delivering these huge performance wins.

      • electroly 4 days ago

        This is a great response but this is "why is Go better than JavaScript?" whereas my question is "why is Go better than C#, given that C# was famously created by the guy writing the blog post and Go is a language from a competitor?"

        C# and TypeScript are Hejlsberg's children; C# is such an obvious pick that there must have been a monster problem with it that they didn't think could ever be fixed.

        C# has all that stuff that the FAQ mentions about Go while also having an obvious political benefit. I'd hope the creator of said language who also made the decision not to use it would have an interesting opinion on the topic! I really hope we find out the real story.

        As a C# developer I don't want to be offended but, like, I thought we were friends? What did we do wrong???

        • fixprix 4 days ago

          Anders answers that question here - https://www.youtube.com/watch?v=10qowKUW82U&t=1154s

          Transcript: "But I will say that I think Go definitely is much more low-level. I'd say it's the lowest level language we can get to and still have automatic garbage collection. It's the most native-first language we can get to and still have automatic GC. In contrast, C# is sort of bytecode-first, if you will. There are some ahead-of-time compilation options available, but they're not on all platforms and don't really have a decade or more of hardening. They weren't engineered that way to begin with. I think Go also has a little more expressiveness when it comes to data structure layout, inline structs, and so forth."
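
          To illustrate the inline-struct point (hypothetical types, just a sketch): embedding a struct by value keeps its fields in the same allocation as the containing node, with no per-node pointer chase:

              package ast

              type TextRange struct {
                  Pos, End int
              }

              type Node struct {
                  Kind  int
                  Range TextRange // stored inline: no separate allocation, no indirection
              }

              // Declaring the field as `Range *TextRange` would instead add a pointer
              // and a separate heap object per node.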

          • electroly 4 days ago

            This is a great link, thank you!

            For anyone who can't watch the video, he mentions a few things (summarizing briefly just the linked time code, it's worth a watch):

            - Go being the lowest level language that still has garbage collection

            - Inline structs and other data structure expressiveness features

            - Existing JS code is in a C-like function+data structure style and not an OOP style, this is easier to translate directly to Go while C# would require OOPifying it.

          • WuxiFingerHold 4 days ago

            Thanks for the link. I'm not fully convinced by Anders' answer. C# has records, first-class functions, structs, Span<T>. Lots of control, and I'd say more than Go. I'd even say C# is much closer to TS than Go is. You can use records for the data structures. The only little annoyance is that you need to write the functions as static methods. So an argument for easy translation would lead to C#. Also, C# has advantages over Go, e.g. null safety.

            Sure, AOT is not as mature in C#, but is this reason enough to be a show stopper? It seems there are other reasons Anders doesn't want to address publicly. Maybe reasons as simple as "Go is 10 times easier to pick up than C#" and "language features don't matter when the project matters". Those would indeed hurt the image of C#, and Anders obviously doesn't want that.

            But I don't see it as big drama.

            • mexicocitinluez 3 days ago

              I don't think there are other reasons.

              The side-by-sides that show how Go code is closer to the current TS code (visually) than C# would be are pretty compelling. He made it pretty clear they're "porting" not rewriting.

              • WuxiFingerHold 3 days ago

                After reading the long Github thread, I think you're right. It's probably just as simple as "what is the easiest way to copy our TS code 1:1 to a faster language". And this case Go wins due to its simplicity.

                • mexicocitinluez 2 days ago

                  What's funny is that while I understood a chunk of why they made that decision, a ton of things they were talking about went over my head. But then when they showed the side-by-side, I was like "Well, that makes sense".

            • saturn_vk 2 days ago

              > You can use records for the data structures. The only little annoyance is that you need to write the functions as static methods. So an argument for easy translation would lead to C#. Also, C# has advantages over Go, e.g. null safety.

              Wouldn't these things be useful if you are making an actual compiler, that would run TS? Since in this case, the runtime is JS, I don't think any of these things would get any usage, unless they are used in the existing transpiler.

          • vips7L 4 days ago

            An unpopular pick that is probably more low level than Go but also still has a GC: D. Understandable why you wouldn't pick D though. Its ecosystem is extremely small.

            • 999900000999 4 days ago

              I think you D fans need to dogfood a startup based around it.

              It's a fascinating language, but it lacks a flagship product.

              I feel the same way about Haxe. Someone created an amazing language, but it lacks a big enough community.

              Realistically languages need 2 things for adoption. Momentum and ease of use. Rust has more momentum than ease, but arguably can solve problems higher level languages can't.

              I'm half imagining a hackathon like format where teams are challenged to use niche languages. The foundations behind these languages can fund prizes.

              • vips7L 4 days ago

                Did my post come off as a fan? I directly criticized its ecosystem. It wouldn't be my first pick either. I was just making conversation that there are other options.

                And AFAIK Symmetry Investments is that dogfood startup.

          • polskibus 4 days ago

            A missed opportunity to improve C# by dogfooding it with the TS compiler rewrite.

            • geodel 4 days ago

              They are trying to finish their current project and not redo all the projects which their current project may depend upon.

              • cxr 4 days ago

                "Finish"?

            • mohsen1 4 days ago

              C# is too old to change that drastically, just like me

          • Rapzid 3 days ago

            Sounds like C# was too late with dictionary collection expressions.

        • jodrellblank 4 days ago

          > "given that C# was famously created by the guy writing the blog post"

          What is this logic? "You worked on C# years ago so you must use C# for everything"?

          "You must dictate C# to every team you lead forever, no matter what skills they have"?

          "You must uphold a dogma that C# is the best language for everything, because you touched it last"?

          Why aren't you using this logic to argue that they should use Delphi or TurboPascal because Anders Hejlsberg created those? Because there is no logic; the person who created hammers doesn't have to use hammers to solve every problem.

          • uticus 4 days ago

            Yes, but C# is the Microsoft language, and I would say TypeScript is 2nd place Microsoft language (sorry F# folks - in terms of popularity not objective greatness of course).

            So it's not just that the lead architect of C# is involved in the TypeScript changes. It's also that this is under the same roof and the same sign hangs on the building outside for both languages.

            If Ford made a car and powered it with a Chevy engine, wouldn't you be curious what was going on also?

            • _bin_ 4 days ago

              Funny you bring up this analogy. Tons of auto manufacturers these days license other manufacturers' engines and use them in their cars. E.g. a fair number of Ford's cars have had Mazda engines, and a fair number of Mazdas have had Ford engines.

              • nindalf 4 days ago

                Could you give some examples of both? Also, why did they choose to do this?

                • kcrwfrd_ 4 days ago

                  Toyota 86 and Subaru BRZ are basically the same car. The car was designed by Toyota while Subaru supplied the engine. Just one example.

                  • sellmesoap 3 days ago

                    I think Toyota also owns a significant amount of Subaru, and probably other manufacturers as well; Toyota is the 600 lb gorilla of the car industry.

                • _bin_ 3 days ago

                  Sure. Mazda’s CX-3/5/9 in the aughts and early teens often had licensed Ford engines. The current Ford Tourneo Connect has a wholly VW-manufactured engine.

                  It’s probably most common when an automaker introduces a new make and wants to save time and capital on developing and getting into production a new engine.

                • sellmesoap 3 days ago

                  The last models of the Ford Ranger and Mazda B-series light pickup trucks were mostly the same except for the badges.

                  The Toyota Matrix and Pontiac Vibe used a lot of the same parts and shared an engine and drivetrain, if I'm not mistaken.

            • jodrellblank 4 days ago

              > "So it's not just that the lead architect of C# is involved in the TypeScript changes."

              Anders Hejlsberg hasn't been the lead architect of C# for like 13 years. Mads Torgersen is:

              https://dotnetcore.show/episode-104-c-sharp-with-mads-torger... - "I got hired by Microsoft 17 years ago to help work on C#. First, I worked with Anders Hejlsberg, who’s sort of the legendary creator and first lead designer of C#. And then when he and I had a little side project with others to do TypeScript, he stayed over there. And I got to take over as lead designer C#. So for the last, I don’t know, nearly a decade, that’s been my job at Microsoft to, to take care of the evolution of the C# programming language"

              Years later, "why aren't you using YOUR LANGUAGE, huh? What's the matter, you don't like YOUR LANGUAGE?" is pushy and weird; he's a person with a job, not a religious cult leader.

              > "If Ford made a car and powered it with a Chevy engine, wouldn't you be curious what was going on also?"

              Like these? https://www.slashgear.com/1642034/fords-powered-by-non-ford-...

              • cxr 4 days ago

                > "why aren't you using YOUR LANGUAGE, huh? What's the matter, you don't like YOUR LANGUAGE?" is pushy and weird

                It's also not what anyone said.

                > It's best not to use quotation marks to make it look like you're quoting someone when you're not. <https://news.ycombinator.com/item?id=21643562>

                • jodrellblank 3 days ago

                  Why is that "best" ?

                  > "An indirect quote lets you capture or summarize what someone said or wrote without using their exact words. It helps to convey the tone or meaning of your source without quoting them directly." - https://www.grammarly.com/blog/punctuation-capitalization/qu...

                  I'm distilling and exaggerating multiple comments to convey the tone and meaning of the bit I want to focus on. Asking "why not C#?" has the implicit framing "it should be C# by default and you have to justify why not", and calling out that bias to show it to be unreasonable is the intent.

                  • cxr 3 days ago

                    [flagged]

              • uticus 3 days ago

                > Like these...

                Nope. None of those are even close to Ford + Chevy. (Ford + Mazda is well known of course).

                I chose the analogy carefully.

                • jodrellblank 3 days ago

                  The analogy missed because I am not American or a car enthusiast.

            • andy81 4 days ago

              F# isn't in the running for third either.

              Maybe top ten behind MSSQL, Powershell, Excel Formulae, DAX etc.

              • sterlind 4 days ago

                hey, there are dozens of us F# users! dozens!

                I do love F#, but its compiler is a rusty set of monkey bars. It's somehow single pass, meaning the type checker will struggle if you don't reorder certain expressions - but also dog slow, especially for `inline` definitions (which work more like templates or hygienic macros than .net generics, and are far more powerful.) File order matters, bafflingly! Newer .net features like spans and ref structs are missing with no clear path to implementation. Doing moderately clever things can cause the compiler to throw weird, opaque, internal errors. F# is built around immutability but there's no integration with the modern .net immutable collections.

                It's clearly languishing and being kept alive by a skeleton crew, which is sad, because it deserves better, but I've used research prototypes less clunky than what ought to be a flagship.

                • throw234234234 3 days ago

                  There's more than a dozen - I should know. I've seen quite a few large systems built in it. Most of the time however it isn't well advertised (finance, insurance, etc).

                  - I don't think the compiler is actually that bad, and yes - inline definitions I think once you are going on the "templating route" are going to be slower. Spans and ref structs are there - I think the design of them is more intuitive actually - the C# "ref struct" at first glance sounds like an oxymoron to me.

                  - modern .net immutable collections - in testing these are significantly slower than some of the F# options especially when you go away from the standard lib and use some of the other collection libraries. The algorithms within the C# immutable libs were not as optimal for some common collection types. They didn't feel modern last time I used them and I was forced to switch to the F# ones and/or others in the F# ecosystem to get the performance I needed. Immutable code felt MUCH more idiomatic with F#.

                  - "Doing moderately clever things can cause the compiler to throw weird, opaque, internal errors" - happened with init fields for me; can't recall another time.

                  Don't mind the file order bit - I thought OCaml and a few other languages also do this. Apps still scale OK, and when I was coding in it, it got me out of a few spaghetti code issues as the code scaled up to about the 500,000+ LOC mark.

                  However I do agree with you on it being kept alive by skeleton crew - I think the creators and tooling staff have moved on to the next big thing (AI and specifically Github Copilot). Which the way things are moving will raise some interesting questions about all coding languages in general potentially.

                • debugnik 4 days ago

                  > Newer .net features like spans and ref structs are missing with no clear path to implementation

                  Huh? They're already implemented! It took years and they've still got some rough edges, yes, but they've been implemented for a few years now.

                  Agreed with the rest, though. As much as I love working with F#, I've jumped ship.

            • jay_kyburz 4 days ago

              It's a bad look for both C# and TypeScript. Anybody starting a new code base now would be looking for ways to avoid both and jump right to Go.

              • sethaurus 4 days ago

                I'm struggling to understand how this is a bad look for Typescript. Do you mean that the specific choice of Go reflects poorly on Typescript, or just the decision to rewrite the compiler in a different non-TS language?

                If it's the latter, I think the pitch of TS remains the same — it's a better way of writing JS, not the best language for all contexts.

                • jay_kyburz 4 days ago

                  I think a lot of folks downplay the performance costs for the convenience of a shared code-base between the front and backend.

                  If the TS team is getting a 10x improvement moving from TS to Go, you might imagine you could save about 10x on your server CPU, or that your backend would be 10x more responsive.

                  If you have dedicated teams for front and back anyhow, is a 10x slowdown really worth a shared codebase?

              • bdangubic 4 days ago

                if I had to use Go I’d change my career and go do some gardening :)

                • bbkane 4 days ago

                  I actually really enjoy Go. Sure it has a type system I wish was more powerful with lots of weird corners ( https://100go.co/ ), but it also has REALLY GOOD tooling- lots of nice libraries, the compiler is fast, the editor tooling is rock solid, it's easy to add linters to warn you about many issues (golangci-lint), and releasing binaries and updating package repositories is super nice (Goreleaser).

                • ricardobeat 4 days ago

                  I'd probably have said the same 5 years ago; it's surprising how easily you change sides once you actually use it in a team.

                  • bdangubic 4 days ago

                    I was mostly joking… some of the most amazing shit, code-wise, that I have seen has been in “non-mainstream” languages (Fortran leads the way here).

                • keyle 4 days ago

                  I had to, and I do think a lot about gardening these days...

                • zeroc8 4 days ago

                  I like Anders' answer there: "But you can achieve pretty great things with it".

                • saturn_vk 2 days ago

                  Because it's so easy that you'd have a lot more time for gardening?

              • moogly 4 days ago

                If they're writing (actually porting) a _compiler_, perhaps.

              • osigurdson 4 days ago

                Go doesn't run in the browser however (except WASM but that is different).

          • Hello71 4 days ago

            > Why aren't you using this logic to argue that they should use Delphi or TurboPascal because Anders Hejlsberg created those?

            as you know full well, Delphi and Turbo Pascal don't have strong library ecosystems, don't have good support for non-Windows platforms, and don't have a large developer base to hire from, among other reasons. if Hejlsberg was asked why Delphi or Turbo Pascal weren't used, he might give one or more of those reasons. the question is why he didn't use C#, for which those reasons don't apply.

        • whattidywhat 3 days ago

          I'm not saying this to start a language war, but look at the cognitive complexity and tooling complexity involved in a C# project. Seriously, for every speed bump you hit in your IDE, think about how many pieces of knowledge you assemble to solve it. Similarly, think about the overhead in designing both the software and tests. Think about cross-platform builds and the tooling required to stand up ops infrastructure. Measure the compilation time. Think about the impedance mismatch between TS and C#.

          Compare that to Go. It's not even close. I see comments bickering about the size of executable files... Almost no major product cares about that within an order of magnitude.

          C# is a wild choice to write a compiler in. Literally in my top 10 things I never want to do. Everything else about it drove them to Go.

        • cryptonector 4 days ago

          GP's answer is a great answer to why Go instead of Rust, which u/no_wizard asked about. And the answer to that boils down to the need to traverse data structures in ways which Rust makes difficult, and the simplicity of a GC.

        • WD-42 4 days ago

          [flagged]

          • _bin_ 4 days ago

            C# is a decently-designed language, but its first principles are being Microsoft-y and Java-y, which are perhaps two of my least favorite principles. That aside, I've worked on C# backends deployed to lots of Linux boxes, and it's not really second-rate these days.

          • debugnik 4 days ago

            Microsoft's implementation has been cross platform for almost a decade now. You're way too late to the Mono FUD party.

            • WD-42 4 days ago

              Almost a decade? Amazing. Considering Go has been cross-platform since its inception, almost twice as long as that, and Rust too, it's no wonder developer mindshare is elsewhere.

        • almosthere 4 days ago

          .NET executables require a runtime environment to be installed.

          Go executables do not.

          TSC is installed in too many places for that burden to be placed on them all of a sudden. It is the same reason Java has had a complicated acceptance history: it's fine in the places where it is pre-installed, but nowhere else.

          Node/React/TypeScript developers do not want to install .NET all of a sudden. If that reaction seems overblown, pretend they decided to write it in Java and ask if you think Node/React/TypeScript developers WANT to install Java.

          • afavour 4 days ago

            FYI this hasn’t been the case with C# for a very long time now.

          • vips7L 4 days ago

          .NET has been able to build a self-contained, single-file executable for both the JIT and AOT targets for quite some time. Java also does not require the user to install a runtime; jlink and jpackage have both been around for a long time.

            • mytec 3 days ago

              Yes, but they may not always work. While generally true, there are still some edge cases.

              SQL Server connections are one example: I do get an .exe with the .pdb in the publish directory, but the .exe won't run correctly without the "Microsoft.Data.SqlClient.SNI.dll" file.

              Another example are any libraries that have "RequiresDynamicCodeAttribute" requirements.

            • yakz 4 days ago

              Maybe some other runtimes do this or it has been changed, but in the past, self-contained single-file .NET deployment just meant that it rolled all the files up during publishing, and when you ran it, it extracted them to a folder. Not really like a single statically linked executable.

            • estebarb 4 days ago

              C# AOT file sizes are huge compared to Go's.

              • MStrehovsky 4 days ago

                Do you have data backing that up? Per https://github.com/MichalStrehovsky/sizegame:

                C#: 945 kB
                Go: 2174 kB

                Both are EXEs you just copy to the machine, no separate runtime needed, talks directly to the OS.

                • estebarb a day ago

                  Sadly yes, we have data. We are migrating our C# SDK to Rust in part because customers want a much smaller dependency. And the AOT compiler didn't trim as much as we wanted.

                  • neonsunset a day ago

                    (Regarding size - there are tools like sizoscope to understand what is taking space; sometimes it's something silly like rooting a lot of metadata with reflection that references many assemblies, or abusing generic virtual members with struct parameters in serialization. Obviously if you can use Rust without a productivity loss it's fine, but usually problems like that take an hour or two to solve, or less.)

                    But in either case, binary sizes are smaller and more scalable than what Go produces. The assumption that Go is good at compact binaries just does not replicate in reality. Obviously it's nice to not have to touch it at all by opting into Rust for distributing native SDKs. Go is completely unfit for this, as it has even more invasive VM interaction when you use it as a dynamically linked library. NativeAOT is "just ok" at this, and Go is "way more underwhelming than you think".

                    • estebarb 8 hours ago

                      I think we would have preferred continuing with .NET, as no one on the team is a Rust expert. But binary size and the lack of some SIMD instructions moved the balance to Rust. And then the PoC showed big memory usage improvements, so...

                      • neonsunset 7 hours ago

                        What kind of SIMD instructions were not available? I assume something like AVX512GFNI or SHA x86 intrinsics?

                        I think if you're in the domain of using SIMD, beyond the base RAM usage of 2-5 MB you should not see a drastic difference unless you have a lot of allocation traffic. But I guess Rust solved most of this; I just wanted to note that specific memory and deployment requirements are usually solvable by changing build and GC settings.

        • vessenes 4 days ago

          It’s a political anti-benefit in most of the open-source world. And C# is not considered a high quality runtime once you leave Windows.

          • electroly 4 days ago

            This is Anders Hejlsberg, the creator of C#, working on a politically important project at Microsoft. That's what I mean by political benefit. The larger open source world doesn't matter for this decision which is why this is a simple announcement of an internal Microsoft decision rather than an invitation for comments ahead of time.

            • vessenes 4 days ago

              I’m sure Microsoft’s strategy department would disagree with you. As a C# devotee - I get that you’re upset. And you may want to update your priors on where C# sits in Microsoft’s current world. But I think it’s a mistake to imagine this isn’t a well-reasoned decision.

              • electroly 4 days ago

                They can disagree if they want but as a career-long Microsoft developer they can't fool me that easily. I'm not even complaining, I'm just stating a fact that high-level steering decisions like this are made in Teams meetings between Microsoft employees, not in open discussion with the community. It's the same in .NET, which is a very open source project whose highest-level decisions are, nonetheless, made in Teams meetings between Microsoft employees and then announced to the public. I'm fine with this but let's not kid ourselves about it.

                That said, I must have misstated my opinion if it seems like I didn't think they have a good reason. This is Anders Hejlsberg. The guy is a genius; he definitely has a good reason. They just didn't say what it is in this blog post (but did elsewhere in a podcast video linked in the HN thread).

            • IshKebab 4 days ago

              > The larger open source world doesn't matter for this decision

              It obviously does because the larger open source world are huge users of Typescript. This isn't some business-only Excel / PowerBI type product.

              To put it another way, I think a lot of people would get quite pissed if tsc was going to be rewritten in C# because of the obvious headaches that's going to cause to users. Go is pretty much the perfect option from a user's point of view - it generates self-contained statically linked binaries.

            • thayne 4 days ago

              It would pose a substantial risk for the TypeScript project. Many people would see it as an unwanted and hostile push of a Microsoft technology on the TypeScript community.

              And there would be logistical problems. With Go, you just need to distribute the executable, but with C#, you also need a .NET runtime, and on any platform that isn't Windows that almost certainly isn't already installed. And even if it is, you have to worry about whether the runtime is sufficiently up to date.

              If they used C#, there is a chance the community might fork TypeScript, or switch to something else, and that might not be a gamble MS would want to take just to get more exposure for C#.

            • smooth_criminal 4 days ago

              Okay, not to be petty here, but it's important to note that on his GitHub he did not star the dotnet repository but has starred multiple Go repos and multiple other C++ and TS repos.

          • fabian2k 4 days ago

            Modern C# (.NET Core and newer) works perfectly fine on Linux.

          • naasking 4 days ago

            > And C# is not considered a high quality runtime once you leave Windows.

            By who?

            • cyral 4 days ago

              Usually by someone who hasn't used C# since 2015 (when this opinion was fairly valid)

              • WD-42 4 days ago

                It’s always the same response, c# was crappy but it’s not crappy anymore. Well guess what, Go has been not crappy for a lot longer than C# has been not crappy, maybe that’s part of the reason people like it more.

                • naasking 3 days ago

                  > Well guess what, Go has been not crappy for a lot longer than C# has been not crappy, maybe that’s part of the reason people like it more.

                  Nobody said anything about who likes what more, nor does that even matter in the context of the original claim that .NET doesn't have a good runtime outside of Windows.

      • 999900000999 4 days ago

        I personally find Go miles easier than Rust.

        Is this the ultimate reason? Go is fast enough without being overly difficult. I'm humbly open to being wrong.

        While I'm here, any reason Microsoft isn't sponsoring a solid open source game engine?

        Even a bit of support for Godot's C# (helping them get it working on the web) would be great.

        Even better would be a full C# engine with support for web assembly.

        https://github.com/godotengine/godot/issues/70796

        • cardanome 4 days ago

          > Even a bit of support for Godot's C#( help them get it working on web), would be great.

          They did that. https://godotengine.org/article/introducing-csharp-godot/

          At least some initial grant to get it started.

          Getting C# working on the web would be amazing. It is already on the roadmap, but some sponsorship would help tremendously for sure.

          • 999900000999 4 days ago

            Ok. Credit where credit is due, but considering the sheer value of having the next generation of programmers comfortable with .NET, Microsoft *should* chip in more.

            • Ray20 3 days ago

              It seems Microsoft is not betting on C#, and I think the main reason for this is that C# isn't future-proof because of its ugliness.

              It is a powerful and robust language with a great standard library, but you just can't be comfortable with it. All that boilerplate, all those sealed override virtual public protected modifiers or whatnot before each declaration, those curly braces everywhere. You are always inside classes that are inside a namespace, and even then you need to go deeper and have curly braces with properties and arrows in random places. Delegates and events are ugly and unintuitive, there are two sets of syntax for LINQ (and honestly for almost any somewhat new feature of the language), ref/in/out, you name it. It is hard to push something so inelegant.

              • WorldMaker 2 days ago

                A lot of the last few C# compiler versions have been about "boiler-plate" reduction. Namespaces don't need curly braces any more and are just a single line at the top. You can write some top-level code inside a namespace without it needing to be in a class. More of the properties and method bodies that are simple can also be written entirely with arrows without curly braces.

                Delegates and Events were a mistake, but that's a low-level .NET mistake that a lot of modern code can easily ignore, with Action<> and Func<> now reliably almost everywhere and WinForms easy to write off as "dead". (You can especially eliminate the need for the ugliness of Delegates and Events with System.Reactive.Linq.)

                Records and Primary Constructors remove a ton of the boiler-plate of writing basic "DTOs" and/or dependency injection.

                C# is pretty elegant, and a nicely evolving language. Microsoft isn't any longer trying to bet on C# as a "systems programming language" because too many people see JIT support and VMs as "not low level enough" (including apparently also Anders Hejlsberg), but that doesn't mean C# isn't "future proof".

            • 9rx 4 days ago

              Hasn't Microsoft largely hitched their wagon to Go these days, though (not just this project)? They even maintain their own Go compiler: https://github.com/microsoft/go

              It is a huge company. They can do more than one thing. C#/.NET certainly isn't dead, but I'm not sure they really care if you do use it like they once did. It's there if you find it useful. If not, that's cool too.

              • 999900000999 4 days ago

                We're talking about a nominal amount of funding to effectively train 10s of thousands of developers.

                I think Microsoft can find the money if they wanted to.

                • 9rx 4 days ago

                  I'm sure Microsoft could find the money to do a lot of different things. But why that instead of the infinite alternatives that the money could be spent on instead?

            • reactordev 4 days ago

              History has shown Microsoft abandoning any gamedev toolkit or SDK they "support": Managed DirectX, XNA, etc.

              Personally, I would like them to never touch the game dev side of the market.

          • WorldMaker 2 days ago

            > Getting C# working on web would be an amazing.

            People have been using Blazor WASM in Production for more than a year now. It's been stable since .NET 8.

        • pjmlp 3 days ago

          They do: Unreal and Unity tooling for Visual Studio; it is even part of the installer.

          Also, some of the low-level improvements in C# have been done in collaboration with the Unity team, driven by their requirements around their Burst use cases.

        • tonyhart7 4 days ago

          "any reason Microsoft isn't sponsoring a solid open source game engine"

          I can see them doing this in the future tbh. Given how large their Xbox gaming ecosystem is, this path makes a lot of sense, since they can cut costs while giving their studios and indie developers more options.

          • 999900000999 4 days ago

            While I'm dreaming of things that will never ever happen, I would absolutely love for them to buy the game engine side of Unity and open source it.

            • runevault 4 days ago

              Unless I missed Unity sorting a ton of stuff out, I assume they're going to have to sell themselves off for parts at some point, after the runtime fee fiasco that was supposed to make them profitable led to developers being angry or outright leaving the ecosystem. My assumption, if that happens (unless the DOJ gets involved for some reason), is that MS buys it for this reason.

              • 999900000999 3 days ago

                Unity isn't really worth 9 billion imo.

                I would like MS to buy them out and FOSS the engine. Maybe if they split the ad business off into its own thing.

                Unity feels like a bizarre, almost abusive business relationship. They can change the terms of service at will.

                The licensing is confusing. Billy is a freelancer. He makes a small game for his friend's company. His friend's company raises a funding round.

                Depending on how much money is raised, Unity is going to call Billy up and extort him to upgrade to a higher license tier.

                I don't particularly like Godot, but every few months I try and learn it again.

                The game engine landscape is like picking the least worst option.

                All that said, to be fair, Unity provided a high-quality game engine for effectively nothing to the vast majority of its users for over a decade. It's time to pay the piper.

      • sime2009 4 days ago

        > we're undertaking this more as a port that maintains the existing behavior and critical optimizations we've built into the language. Idiomatic Go strongly resembles the existing coding patterns of the TypeScript codebase, which makes this porting effort much more tractable.

        Cool. Can you tell us a bit more about the technical process of porting the TS code over to Go? Are you using any kind of automation or translation?

        Personally, I've found Copilot to be surprisingly effective at translating Python code over to structurally similar Go code.

      • fabian2k 4 days ago

        I find the discussion about the choice quite interesting, and many points are very convincing (like the GC one). But I am a bit confused about the comparison between Go and C#. Both should meet most of the criteria like GC, control over memory layout/allocation, and good support for concurrency. I'm curious what the weaknesses of C# for this particular use case were that led to the decision for Go.

        • zeroc8 4 days ago

          Anders answers this in the video. Go is lower level and also closer to JavaScript's programming style. They didn't want to go fully object-oriented for this project.

        • typ 4 days ago

          C# is fine. But last I checked, AOT compilation generates a bunch of .dll files, which isn't suitable for a CLI program the way Go's zero-dependency binary is.

          • fabian2k 4 days ago

            C# can create single-binary executables, even without native AOT.

            • cardanome 4 days ago

              They are still going to be significantly bigger than the equivalent Go binary because of the huge .NET runtime, no?

              • zigzag312 4 days ago
                • xarope 4 days ago

                  Since this is just Hello World, then TinyGo: 644kB

                • ricardobeat 4 days ago

                  Is this a fair comparison, won't doing anything more significant than `print` in C# require a .NET framework to be installed (200MB+)?

                  • neonsunset 4 days ago

                    No. This is normal native compilation mode. As you reference more features from either the standard library or the dependencies, the size of the binary will grow (sometimes marginally, sometimes substantially if you are heavily using struct generics with virtual members), but on average it should be more scalable than Go’s compilation model. Even JIT-based single-file binaries, with trimming, take about ~13-40 MB depending on the task. The runtime itself AFAIK, if installed separately, is below 100MB (installing full SDK takes more space, which is a given).

                  • ChocolateGod 4 days ago

                    Spending ages slamming your head on your keyboard because you get a dll error or similar running a .NET app and just can't find the correct runtime version / download is a great pastime.

                    Then, when you find the correct version, you have to install both the x86 and x64 versions because the first one you installed doesn't work.

                    Yeah, great ecosystem.

                    At least a Go binary runs 99.99999% of the time when you start it.

                • Rapzid 3 days ago

                  Yeah and I doubt many people care if the TS compiler is 200MB anyway LOL. It's 2025.

              • fabian2k 4 days ago

                Depends on how well trimming works. It's probably still larger than Go even with trimming, but Go also has a runtime and won't produce tiny binaries.

          • pjmlp 4 days ago

            You can choose how the linking process is done, just like you can choose to have a Go binary with dependencies.

          • SkiFire13 4 days ago

            C# has an option to publish to a single self-contained file.

            • osigurdson 4 days ago

              It would be big enough that people would find it annoying (unless using AOT which is hard).

      • skybrian 4 days ago

        It seems like, without mentioning any language by name, this answers "why not Rust" better than "why not C#."

        I don't think Go is a bad choice, though!

      • acomagu 4 days ago

        Personally, I want to know why Go was chosen instead of Zig. I think Zig is really more WASM-friendly than Go, and it's much more similar to JavaScript than Rust is.

        Memory management? Or a stricter type system?

        • throw16180339 4 days ago

          Zig isn't memory safe, has regular breaking changes, and doesn't have a garbage collector.

        • commandersaki 3 days ago

          First reason in my mind is there isn't an abundance of Zig programmers internally in Microsoft, in the job market, and in open source. It's probably a fine choice if you're using it for your passion project e.g. Hashimoto.

        • smarx007 4 days ago

          For being production-ready?

        • pjmlp 3 days ago

          Zig still isn't production-ready, and most likely it isn't memory safe the way Go is.

      • breadwinner 4 days ago

        So when can we expect Go support in Visual Studio? I am sold by Anders' explanation that Go is the lowest language you can use that has garbage collection!

        • pebal 4 days ago

          You can also have GC in C++ and generate even faster code.

      • mavelikara 4 days ago

        Thanks for the thoughtful response!

  • noodletheworld 4 days ago

    Go is quite difficult to embed in other applications due to the runtime.

    What do you see as the future for use cases where the typescript compiler is embedded in other projects? (Eg. Deno, Jupyter kernels, etc.)

    There’s some talk of an inter process api, but vague hand waving here about technical details. What’s the vision?

    In TS7 will you be able to embed the compiler? Or is that not supported?

    • mappu 4 days ago

      Go has buildmode=c-shared, which compiles your program to a C-style shared library with C ABI exports. Any first call into your functions initializes the runtime transparently. It's pretty seamless and automatic, and it'll perform better than embedding a WASM engine.
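
      As a rough illustration, here is a minimal sketch of what such a c-shared export could look like in Go; the function name, signature, and behavior are made up for illustration and are not the actual typescript-go API:

      ```go
      // Hypothetical example of exposing a Go function over the C ABI.
      // Build with: go build -buildmode=c-shared -o libtsc.so
      package main

      import "C"

      // CheckProject pretends to type-check the project described by the given
      // tsconfig path and returns a diagnostic count. Illustrative only.
      //
      //export CheckProject
      func CheckProject(configPath *C.char) C.int {
          _ = C.GoString(configPath) // a real implementation would drive the compiler here
          return 0
      }

      // main is required by -buildmode=c-shared but is never called by the host.
      func main() {}
      ```

      The host loads the resulting library and calls the exported symbol like any other C function; the Go runtime initializes itself on the first call, as described above.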

    • DanRosenwasser 4 days ago

      We are sure there will be a way to embed via something like WebAssembly, but the goal is to start from the IPC layer (similar to LSP), and then explore how possible it will be to integrate at a tighter level.

    • _benton 4 days ago

      Golang is actually pretty easy to embed into JS/TS via wasm. See esbuild.

      • curtisblaine 4 days ago

        Esbuild is distributed as a series of native executables that are selectively installed by looking at arch and platform. Although you can build esbuild in wasm (and that's what you use when you run it in the browser), what you actually run from .bin in the CLI is a native executable, not wasm.

    • nine_k 4 days ago

      Why embed it if you can run a process alongside yours and use efficient IPC? I suppose the compiler code should not be in some tight loop where an IPC boundary would be a noticeable slowdown. Compilation occurs relatively rarely, compared to running the compiled code, in things like Node / Deno / Bun / Jupyter. Language servers use this model with fairly chatty JSON-RPC IPC, and they don't seem to feel slow.

      • wokwokwok 4 days ago

        Because running a parallel process is often difficult. In most cases, the question becomes:

        So, how exactly is my app/whatever supposed to spin up a parallel process in the OS and then talk to it over IPC? How do you shut it down when the 'host' process dies?

        Not vaguely. Not hand wave "just launch it". How exactly do you do it?

        How do you do it in environments where that capability (spawning arbitrary processes) is limited? eg. mobile.

        How do you package it so that you distribute it in parallel? Will it conflict with other applications that do the same thing?

        When you look at, for example, a jupyter kernel, it is already a host process launched and managed by jupyter-lab or whatever, which talks via network chatter.

        So now each kernel process has to manage another process, which it talks to via IPC?

        ...

        Certainly, there are no obvious performance reasons to avoid IPC, but I think there are use cases where having the compiler embedded makes more sense.

        • jchw 4 days ago

          > So, how exactly is my app/whatever supposed to spin up a parallel process in the OS and then talk to it over IPC?

          Usually the very easiest way to do this is to launch the target as a subprocess and communicate over stdin/stdout. (Obviously, you can also negotiate things like shared memory buffers once you have a communication channel, but stdin/stdout is enough for a lot of stuff.)
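
          For instance, here is a minimal sketch of that pattern from a Go parent process; the `tsgo --serve` binary name, flag, and line-delimited JSON protocol are assumptions for illustration, not the announced API:

          ```go
          // Spawn a compiler service as a child process and talk to it over stdin/stdout.
          package main

          import (
              "bufio"
              "fmt"
              "log"
              "os/exec"
          )

          func main() {
              cmd := exec.Command("tsgo", "--serve") // hypothetical binary and flag
              stdin, err := cmd.StdinPipe()
              if err != nil {
                  log.Fatal(err)
              }
              stdout, err := cmd.StdoutPipe()
              if err != nil {
                  log.Fatal(err)
              }
              if err := cmd.Start(); err != nil {
                  log.Fatal(err)
              }

              // One request out, one newline-delimited response back.
              fmt.Fprintln(stdin, `{"method":"check","params":{"project":"tsconfig.json"}}`)
              reply, err := bufio.NewReader(stdout).ReadString('\n')
              if err != nil {
                  log.Fatal(err)
              }
              fmt.Print(reply)

              stdin.Close() // closing our end tells the child to shut down cooperatively
              _ = cmd.Wait()
          }
          ```

          The same pattern works from any host that can spawn a process and write to its pipes (e.g. Node's child_process), which is essentially the approach esbuild takes.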

          > How do you shut it down when the 'host' process dies?

          From the perspective of the parent process, you can go through some extra work to guarantee this if you want; every operating system has facilities for it. For example, in Linux, you can make use of PR_SET_PDEATHSIG. Actually using that facility properly is a bit trickier, but it does work.
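
          For the Linux case, a minimal sketch of wiring that up from a Go parent (the binary name and flag are placeholders):

          ```go
          //go:build linux

          // Ask the kernel to SIGKILL the child if the spawning process dies.
          package main

          import (
              "log"
              "os/exec"
              "syscall"
          )

          func main() {
              cmd := exec.Command("tsgo", "--serve") // placeholder binary and flag
              cmd.SysProcAttr = &syscall.SysProcAttr{
                  // PR_SET_PDEATHSIG fires when the thread that forked the child
                  // exits, which is part of the trickiness mentioned above.
                  Pdeathsig: syscall.SIGKILL,
              }
              if err := cmd.Start(); err != nil {
                  log.Fatal(err)
              }
              _ = cmd.Wait()
          }
          ```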

          However, since the child process, in this case, is aware that it is a child process, the best way to go about it would be to handle it cooperatively. If you're communicating over stdin/stdout, the child process's stdin will close when the parent process dies. This is portable across Windows and UNIX-likes. The child process can then exit.
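
          The child's side of that cooperative shutdown amounts to "exit when stdin reaches EOF"; a minimal sketch in Go:

          ```go
          // Child-side shutdown: treat EOF on stdin as "the parent is gone, exit".
          package main

          import (
              "bufio"
              "fmt"
              "os"
          )

          func main() {
              scanner := bufio.NewScanner(os.Stdin)
              for scanner.Scan() {
                  // One request per line; this echo stands in for the real work.
                  fmt.Printf("received %d bytes\n", len(scanner.Bytes()))
              }
              // Scan returns false on EOF, i.e. the parent exited or closed the pipe.
              os.Exit(0)
          }
          ```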

          > How do you do it in environments where that capability (spawning arbitrary processes) is limited? eg. mobile.

          On Android, there is nothing special to do here as far as I know. You should be able to bundle and spawn a native process just fine. Go binaries are no exception.

          On iOS, it is true that apps are not allowed to spawn child processes, as far as I am aware. On iOS you'd need a different strategy. If you still want a native code approach, though, it's more than doable. Since you're on iOS, you'll have some native code somewhere. You can compile Go code into a Clang-compatible static library archive, using -buildmode=c-archive. There's a bit more nuance to it to get something that will link properly in iOS, but it is supported by Go itself (Go supports iOS and Android in the toolchain and via gomobile.) Once you have something that can be linked into the process space, the old IPC approach would continue to work, with the semantic caveat that it's not technically interprocess anymore. This approach can also be used in any other situation you're doing native code, so as long as you can link C libraries.

          If you're in an even more restrictive situation, like, I dunno, Cloudflare Pages Functions, you can use a WASM bundle. It comes at a performance hit, but given that the Go port of the TypeScript compiler is already roughly 3.5x faster than the TypeScript implementation, it probably will not be a huge issue compared to today's performance.

          > How do you package it so that you distribute it in parallel? Will it conflict with other applications that do the same thing?

          There are no particular complexities with distributing Go binaries. You need to ship a binary for each architecture and OS combination you want to support, but Go has relatively straight-forward cross-compiling, so this is usually very easy to do. (Rather unusually, it is even capable of cross-compiling to macOS and iOS from non-Apple platforms. Though I bet Zig can do this, too.) You just include the binary into your build. If you are using some bindings, I would expect the bindings to take care of this by default, making your resulting binaries "just work" as needed.

          It will not conflict with other applications that do the same thing.

          > When you look at, for example, a jupyter kernel, it is already a host process launched and managed by jupyter-lab or whatever, which talks via network chatter.

          > So now each kernel process has to manage another process, which it talks to via IPC?

          Yes, that's right: you would have to have another process for each existing process that needs its own compiler instance, if going with the IPC approach. However, unless we're talking about an obscene number of processes, this is probably not going to be much of an issue. If anything, keeping it out-of-process might help improve matters if it's currently doing things synchronously that could be asynchronous.

          Of course, even though this isn't really much of an issue, you could still avoid it by going with another approach if it really was a huge problem. For example, assuming the respective Jupyter kernel already needs Node.JS in-process somehow, you could just as well have a version of tsc compiled into a Node-API module, and do everything in-process.

          > Certainly, there are no obvious performance reasons to avoid IPC, but I think there are use cases where having the compiler embedded makes more sense.

          Except for browsers and edge runtimes, it should be possible to make an embedded version of the compiler if it is necessary. I'm not sure if the TypeScript team will maintain such a version on their own, it remains to be seen exactly what approach they take for IPC.

          I'm not a TypeScript Compiler developer, but I hope these answers are helpful in some way anyways.

          • noodletheworld 4 days ago

            Thanks for chiming in with these details, but I would just like to say:

            > It will not conflict with other applications that do the same thing.

            It is possible not to conflict with existing parallel deployments, but depending on your IPC mechanism, it is by no means assured when you're not forking and are instead launching an external process.

            For example, it could by default bind a specific default port. This would work in the 'naive' situation where the client doesn't specify a port and no parallel instances are running. ...but if two instances are running, they'll both try to use the same port. Arbitrary applications can connect to the same port. Maybe you want to share a single compiler service instance between client apps in some cases?

            Not conflicting is not a property of parallel binary deployment and communication via IPC by default.

            IPC is, by definition, intended to be accessible by other processes.

            Jupyter kernels for example are launched with a specified port and a secret by cli argument if I recall correctly.

            However, you'd have to rely on that mechanism being built into the typescript compiler service.

            ...ie. it's a bit complicated right?

            Worth it for the speedup? I mean, sure. Obviously there is a reason people don't embed postgres. ...but they don't try to ship a copy of it alongside their apps either (usually).

            • nine_k 4 days ago

              > Not conflicting is not a property of parallel binary deployment

              I fail to see how starting another process under an OS like Linux or Windows can be conflicting. Don't share resources, and you're conflict-free.

              > IPC is, by definition intended to be accessible by other processes

              Yes, but you can limit the visibility of the IPC channel to a specific process, in the form of stdin/stdout pipe between processes, which is not shared by any other processes. This is enough of a channel to coordinate creation of a more efficient channel, e.g. a shmem region for high-bandwidth communication, or a Unix domain socket (under Linux, you can open a UDS completely outside of the filesystem tree), etc.

              A Unix shell is a thing that spawns and communicates with running processes all day long, and I'm yet to hear about any conflicts arising from its normal use.

              • noodletheworld 4 days ago

                This seems like an oddly specific take on this topic.

                You can get a conflicting resource in a shell by typing 'npm start' twice in two different shells, and it'll fail with 'port in use'.

                My point is that you can do non-conflicting IPC, but by default IPC is conflicting because it is intended to be.

                You cannot bind the same port, semaphore, whatever if someone else is using it. That's the definition of having addressable IPC.

                I don't think arguing otherwise is defensible or reasonable.

                Worrying that a network service might bind the same port as another copy of the same network service deployed on the same target by another host is entirely reasonable.

                I think we're getting off into the woods here with an arbitrary 'die on this hill' point about semantics which I really don't care about.

                TLDR: If you ship an IPC binary, you have to pay attention to these concerns. Pretending otherwise means you're not doing it properly.

                It's not an idle concern; it's a real concern that real actual application developers have to worry about, in real world situations.

                I've had to worry about it.

                I think it's not unfair to think it's going to be more problematic than the current, very easy, embedded story, and it is a concern that simply does not exist when you embed a library instead of communicating using IPC.

            • jchw 4 days ago

              > It is possible not to conflict with existing parallel deployments, but depending on your IPC mechanism, it is by no means assured when you're not forking and are instead launching an external process.

              Sure, some IPC approaches can run into issues, such as using TCP connections over loopback. However, I'm describing an approach that should never conflict since the resources that are shared are inherited directly, and since the binary would be embedded in your application bundle and not shared with other programs on the system. A similar example would be language servers which often work this way: no need to worry about conflicts between different instances of language servers, different language servers, instances of different versions of the same language server, etc.

              There's also some precedent for this approach since as far as I understand it, it's also what the Go-based ESBuild tool does[1], also popular in the Node.JS ecosystem (it is used by Vite.)

              > For example, it could by default bind a specific default port. This would work in the 'naive' situation where the client doesn't specify a port and no parallel instances are running. ...but if two instances are running, they'll both try to use the same port. Arbitrary applications can connect to the same port. Maybe you want to share a single compiler service instance between client apps in some cases?

              > Not conflicting is not a property of parallel binary deployment and communication via IPC by default.

              > IPC is, by definition intended to be accessible by other processes.

              Yes, although the set of processes which the IPC mechanism is designed to be accessible by can be bound to just one process, and there are cross-platform mechanisms to achieve this on popular desktop OSes. I can not speak for why one would choose TCP over stdin/stdout, but, I don't expect that tsc will pick a method of IPC that is flawed in this way, since it would not follow precedent anyway. (e.g. tsserver already uses stdio[2].)

              > Jupyter kernels for example are launched with a specified port and a secret by cli argument if I recall correctly.

              > However, you'd have to rely on that mechanism being built into the typescript compiler service.

              > ...ie. it's a bit complicated right?

              > Worth it for the speedup? I mean, sure. Obviously there is a reason people don't embed postgres. ...but they don't try to ship a copy of it along side their apps either (usually).

              Well, I wouldn't honestly go as far as to say it's complicated. There's a ton of precedent for how to solve this issue without any conflict. I can not speak to why Jupyter kernels use TCP for IPC instead of stdio, I'm very sure they have reasons why it makes more sense in their case. For example, in some use cases it could be faster or perhaps just simpler to have multiple channels of communication, and doing this with multiple pipes to a subprocess is a little more complicated and less portable than stdio. Same for shared memory: You can always have a protocol to negotiate shared memory across some serial IPC mechanism, but you'll almost always need a couple different shared memory backends, and it adds some complexity. So that's one potential reason.

              (edit: Another potential reason to use TCP sockets is, of course, if your "IPC" is going across the network sometimes. Maybe this is of interest for Jupyter, I don't know!)

              That said, in this case, I think it's a non-issue. ESBuild and tsserver demonstrate sufficiently that communication over stdio is sufficient for these kinds of use cases.

              And of course, even if the Jupyter kernel itself has to speak the TCP IPC protocols used by Jupyter, it can still subprocess a theoretical tsc and use stdio-based IPC. Not much complexity to speak of.

              Also, unrelated, but it's funny you should say that about postgres, because actually there have been several different projects that deliver an "embeddable" subset of postgres. Of course, the reasoning for why you would not necessarily want to embed a database engine are quite a lot different from this, since in this case IPC is merely an implementation detail whereas in the database case the network protocol and centralized servers are essentially the entire point of the whole thing.

              [1]: https://github.com/evanw/esbuild/blob/main/cmd/esbuild/stdio...

              [2]: https://github.com/microsoft/TypeScript/wiki/Standalone-Serv...

    • jillyboel 4 days ago

      Javascript is also quite difficult to embed in other applications. So not much has changed, except it's no longer your language of choice.

      • demurgos 4 days ago

        TypeScript compiles to JavaScript. It means both `tsc` and the TS program can share the same platform today.

        With a TSC in Go, it's no longer true. Previously you only had to figure out how to run JS, now you have to figure out both how to manage a native process _and_ run the JS output.

        This obviously matters less for situations where you have a clear separation between the build stage and runtime stage. Most people complaining here seem to be talking about environments where compilation is tightly integrated with the execution of the compiled JS.

  • aylmao 4 days ago

    This is awesome. Thanks to you and all the TypeScript team for the work they put on this project! Also, nice to see you here, engaging with the community.

    Porting to Go was the right decision, but part of me would've liked to see a different approach to solve the performance issue. Here I'm not thinking about the practicality, but simply about how cool it would've been if performance had instead been improved via:

    - porting to OCaml. I contributed to Flow once upon a time, and a version of TypeScript in OCaml would've been huge in unifying the efforts here.

    - porting to Rust. Having "official" TypeScript crates in rust would be huge for the Rust javascript-tooling ecosystem.

    - a new runtime (or compiler!). I'm thinking here an optional, stricter version of TypeScript that forbids all the dynamic behaviours that make JavaScript hard to optimize. I'm also imagining an interpreter or compiler that can then use this stricter TypeScript to run faster or produce an efficient native binary, skipping JavaScript altogether and using types for optimization.

    This last option would've been especially exciting since it is my opinion that Flow was hindered by the lack of dogfooding, at least when I was somewhat involved with the project. I hope this doesn't happen in the TypeScript project.

    None of these are questions, just wanted to share these fanciful perspectives. I do agree Go sounds like the right choice, and in any case I'm excited about the improvement in performance and memory usage. It really is the biggest gripe I have with TypeScript right now!

    • muglug 4 days ago

      Not Daniel, but I've ported a typechecker from PHP to Rust (with some functional changes) and also tried working with the official Hack OCaml-based typechecker (a precursor to Flow).

      Rust and OCaml are _maybe_ prettier to look at, but for the average TypeScript developer Go is a much more understandable target IMO.

      Lifetimes and ownership are not trivial topics to grasp, and they add overhead (as discussed here: https://github.com/microsoft/typescript-go/discussions/411) that not all contributors might grasp immediately.

  • textlapse 4 days ago

    I am curious why dotnet was not considered - it should run everywhere Go does with added NativeAoT too, so I am especially curious given the folks involved ;)

    (FWIW, It must have been a very well thought out rationale.)

    Edit: watched the relevant clip from the GH discussion - makes sense. Maybe push NativeAOT to be as good?

    I am (positively) surprised Hejlsberg has not used this opportunity to push C#: a rarity in the software world where people never let go of their darlings. :)

  • AshleysBrain 4 days ago

    Well-optimized JavaScript can get to within about 1.5x the performance of C++ - something we have experience with having developed a full game engine in JavaScript [1]. Why is the TypeScript team moving to an entirely different technology instead of working on optimizing the existing TS/JS codebase?

    [1] https://www.construct.net/en

    • do_not_redeem 4 days ago

      Well-optimized JavaScript can, if you jump through hoops like avoiding object creation and storing your data in `Uint8Array`s. But idiomatic, maintainable JS simply can't (except in microbenchmarks where allocations and memory layout aren't yet concerns).

      In a game engine, you probably aren't recreating every game object from frame to frame. But in a compiler, you're creating new objects for every file you parse. That's a huge amount of work for the GC.

      • AshleysBrain 4 days ago

        I'd say that our JS game engine codebase is generally idiomatic, maintainable JS. We don't really do anything too esoteric to get maximum performance - modern JS engines are phenomenal at optimizing idiomatic code. The best JS performance advice is to basically treat it like a statically typed language (no dynamically-shaped objects etc) - and TS takes care of that for you. I suppose a compiler is a very different use case and may do things like lean on the GC more, but modern JS GCs are also amazing.

        Basically I'd be interested to know what the bottlenecks in tsc are, whether there's much low-hanging fruit, and if not why not.

        • Yoric 4 days ago

          Note that games are based on main loops + events, for which JITs are optimized, while compilers are typically single run-to-completion, for which JITs aren't.

          So this might be a very different performance profile.

          *edit* I had initially written "single-pass", but in the context of a compiler, that's ambiguous.

      • immibis 4 days ago

        In other words, you write asm.js (a JavaScript subset that served as a textual precursor to WebAssembly) and hope your browser has an asm.js JIT compiler, which it doesn't, because it was replaced by WebAssembly.

    • RyanCavanaugh 4 days ago

      Our best estimate for how much faster the Go code is (in this situation) than the equivalent TS is ~3.5x

      In a situation like a game engine I think 1.5x is reasonable, but TS has a huge amount of polymorphic data reading that defeats a lot of the optimizations in JS engines that get you to monomorphic property access speeds. If JS engines were better at monomorphizing access to common subtypes across different map shapes maybe it'd be closer, but no engine has implemented that or seems to have much appetite for doing so.

      • norswap 4 days ago

        I used to work on compilers & JITs, and 100% this — polymorphic calls are the killer of JIT performance, which is why something native is preferable to something that JIT compiles.

        Also for command-line tools, the JIT warmup time can be pretty significant, adding a lot to overall command-to-result latency (and in some cases even wiping out the JIT performance entirely!)

      • spankalee 4 days ago

        > If JS engines were better at monomorphizing access to common subtypes across different map shapes maybe it'd be closer, but no engine has implemented that or seems to have much appetite for doing so.

        I really wish JS VMs would invest in this. The DOM is full of large inheritance hierarchies, with lots of subtypes, so a lot of DOM code is megamorphic. You can do tricks like tearing off methods from Element to use as functions instead of as virtual methods as usual, but that's quite a pain.

    • jerf 4 days ago

      "Well optimized Javascript", and more generally, "well-optimized code for a JIT/optimizer for language X", is a subset of language X, is an undefined subset of language X, is a moving subset of language X that is moving in ways unrelated to your project, is actually multiple such subsets at a minimum one per JIT and arguably one per version of JIT compilers, and is generally a subset of language X that is extremely complicated (e.g., you can lose optimization if your arrays grow in certain ways, or you can non-locally deoptimize vast swathes of your code because one function call in one location happened to do one thing the JIT can't handle and it had to despecialize everything touching it as a result) such that trying to keep a lot of developers in sync with the requirements on a large project is essentially infeasible.

      None of these things say "this is a good way to build a large compiler suite that we're building for performance".

    • jchw 4 days ago

      Please note that compilers and game engines have extremely different needs and performance characteristics—and also that statements like "about 1.5x the performance of C++" are virtually meaningless out-of-context. I feel we've long passed this type of performance discussion by and could do with more nuanced and specific discussions.

    • johnfn 4 days ago

      Who wants to spend all their time hand-tuning JS/TS when you can write the same code in Go, spend no time at all optimizing it, and get 10x better results?

    • dagw 4 days ago

      > Why is the TypeScript team moving to an entirely different technology

      A few things mentioned in an interview:

      - Cannot build native binaries from TypeScript

      - Cannot as easily take advantage of concurrency in TypeScript

      - Writing fast TypeScript requires you to write things in a way that isn't 'normal' idiomatic TypeScript. Easier to onboard new people onto a more idiomatic codebase.

      • dboreham 4 days ago

        The message I hear is: don't use JS, don't use async. Music to my ears.

    • grandempire 4 days ago

      What kind of C++ and what kind of JS?

      - C++ with thousands of tiny objects and virtual function calls?

      - JavaScript where data is stored in a large Int32Array and operated on like a VM?

      If you know anything about how JavaScript works, you know there is a lot of costly and challenging resource management.

    • Cthulhu_ 4 days ago

      While Go can be considered entirely different technology, I'd argue that Go is easy enough to understand for the vast majority of software developers that it's not too difficult to learn.

      (disclaimer: I am a biased Go fan)

      • baq 4 days ago

        It had been very explicitly designed with this goal. The idea was to make a simpler Java that is as easy as possible to deploy and as fast as possible to compile, and by these measures it is a resounding success.

    • internetter 4 days ago

      Sometimes, the time required to optimize is greater than the time required to rewrite.

    • tracker1 4 days ago

      Well-optimized JS isn't the only point of operation here. There's a LOT of exchange, parsing and processing that interacts with the File System and the JS engine itself. It isn't just a matter of loading a JS library and letting it do its thing. Every call that crosses the boundaries from JS runtime to the underlying host environment has a cost. This is multiplied across potentially many thousands of files.

      Just going from ESLint to Biome is more than a 10x improvement... it's not just 1.5x because it's not just the runtime logic at play for build tools.

    • Analemma_ 4 days ago

      I'm not sure how it is in Construct, but IME "well-optimized" JavaScript quickly becomes very difficult to read, debug, and update, because you're relying heavily on runtime implementation quirks and micro-optimizations that make a hash of code cleanliness. Even if you can hit close to native performance, the native equivalent usually has much more idiomatic code. The tsc team needs to balance performance of the compiler against keeping the codebase maintainable, which is especially vital for such a core piece of web infrastructure as TypeScript.

    • _benton 4 days ago

      Are you comparing perfectly written JS to poorly written C++?

    • anonymoushn 4 days ago

      It sounds like the C++ is not well-optimized then?

    • nicoburns 4 days ago

      Numeric code can, but compilers have to do a lot of string manipulation which is almost impossible to optimise well in JS.

    • surajrmal 4 days ago

      How does that scale with number of threads?

    • pizlonator 4 days ago

      Your JS code is way uglier than their Go code, if you're doing those kinds of shenanigans.

      JS is 10x-100x slower than native languages (C++, Go, Rust, etc) if you write the code normally (i.e. don't go down the road of uglifying your JS code to the point where it's dramatically less pleasant to work with than the C++ code you're comparing to).

      • cxr 4 days ago

        There's no such thing as a native language unless you're talking about machine code.

        It's kind of annoying how even someone like Hejlsberg is throwing around words like "native" in such an ambiguous, sloppy, and prone-to-be-misleading way on a project like this.

        "C++" isn't native. The object code that it gets compiled to, large parts of which are in the machine's native language, is.

        Likewise "TypeScript" isn't non-native in any way that doesn't apply to any other language. The fact that tsc emits JS instead of the machine's native language is what makes TypeScript programs (like tsc itself) comparatively slow.

        It's the compilers that are important here, not the languages. (The fact that the TypeScript team was committed to making the typescript-go compiler nearly line-for-line equivalent to the production version of the TypeScript compiler written in itself really highlights this.)

  • pjmlp 4 days ago

    Why not AOT compiled C#, given the team's historical background?

    • dagw 4 days ago

      There is an interview with Anders Hejlsberg here: https://www.youtube.com/watch?v=ZlGza4oIleY

      The question comes up and he quickly glosses over it, but by the sound of it he isn't impressed with the performance or support of AOT compiled C# on all targeted platforms.

      • bokwoon 4 days ago

        https://www.youtube.com/watch?v=10qowKUW82U

        [19:14] why not C#?

        Dimitri: Was C# considered?

        Anders: It was, but I will say that I think Go definitely is -- it's, I'd say, the lowest-level language we can get to and still have automatic garbage collection. It's the most native-first language we can get to and still have automatic GC. In C#, it's sort of bytecode first, if you will; there is some ahead-of-time compilation available, but it's not on all platforms and it doesn't have a decade or more of hardening. It was not geared that way to begin with. Additionally, I think Go has a little more expressiveness when it comes to data structure layout, inline structs, and so forth. For us, one additional thing is that our JavaScript codebase is written in a highly functional style -- we use very few classes; in fact, the core compiler doesn't use classes at all -- and that is actually a characteristic of Go as well. Go is based on functions and data structures, whereas C# is heavily OOP-oriented, and we would have had to switch to an OOP paradigm to move to C#. That transition would have involved more friction than switching to Go. Ultimately, that was the path of least resistance for us.

        Dimitri: Great -- I mean, I have questions about that. I've struggled in the past a lot with Go in functional programming, but I'm glad to hear you say that those aren't struggles for you. That was one of my questions.

        Anders: When I say functional programming here, I mean sort of functional in the plain sense that we're dealing with functions and data structures as opposed to objects. I'm not talking about pattern matching, higher-kinded types, and monads.

        [12:34] why not Rust?

        Anders: When you have a product that has been in use for more than a decade, with millions of programmers and, God knows how many millions of lines of code out there, you are going to be faced with the longest tail of incompatibilities you could imagine. So, from the get-go, we knew that the only way this was going to be meaningful was if we ported the existing code base. The existing code base makes certain assumptions -- specifically, it assumes that there is automatic garbage collection -- and that pretty much limited our choices. That heavily ruled out Rust. I mean, in Rust you have memory management, but it's not automatic; you can get reference counting or whatever you could, but then, in addition to that, there's the borrow checker and the rather stringent constraints it puts on you around ownership of data structures. In particular, it effectively outlaws cyclic data structures, and all of our data structures are heavily cyclic.

        (https://www.reddit.com/r/golang/comments/1j8shzb/microsoft_r...)

        • vlovich123 3 days ago

          I wonder if they explored using a Gc like https://github.com/Manishearth/rust-gc with Rust. I think that probably removes all the borrow checker / cycle impedance mismatch while providing a path to remove Gc from the critical path altogether. Of course the Rust Gc crates are probably more immature, maybe slower, than Go’s so if there’s no path to getting rid of cycles as part of down-the-road perf optimization, then Go makes more sense.

        • DeathArrow 4 days ago

          >C# is heavily OOP-oriented, and we would have had to switch to an OOP paradigm to move to C#

          They could have used static classes in C#.

      • dimitropoulos 4 days ago

        he went into more detail about C# in this one: https://youtu.be/10qowKUW82U?t=1154s

        • jodrellblank 4 days ago

          He says:

          - C# Ahead of Time compiler doesn't target all the platforms they want.

          - C# Ahead of Time compiler hasn't been stressed in production as many years as Go.

          - The core TypeScript compiler doesn't use any classes; Go is functions and datastructures whereas C# is heavily OOP, so they would have to switch paradigms to use C#.

          - Go has better control of low level memory layouts.

          - Go was ultimately the path of least resistance.

    • Cthulhu_ 4 days ago

      I'm not involved in the decisions, but don't C# applications have a higher startup time and memory usage? These are important considerations for a compiler like this that needs to start up and run fast in e.g. new CI/CD boxes.

      For a daemon like an LSP I reckon C# would've worked.

      • rob74 4 days ago

        Yes, in fact that's one of the main reasons given in the two linked interviews: Go can generate "real" native executables for all the platforms they want to support. One of the other reasons is (paraphrasing) that it's easier to port the existing mostly functional JS code to Go than to C#, which has a much more OOP style.

      • neonsunset 4 days ago

        Not when compiled by NativeAOT. It also produces smaller binaries than Go and has better per-dependency scalability (due to metadata compression, pointer-rich section dehydration and stronger reachability analysis). This also means you could use F# for this instead, which is excellent for langdev (provided you don't use printf "%A", which is incompatible, but that's a small sacrifice).

        • kokada 4 days ago

          What is the cross-compilation support for NativeAOT though? This is one of the areas where Go shines (as long as you don't use CGO, which seems perfectly plausible for this project), and while I don't think it would be a deal breaker, it probably makes things a lot easier.

          • neonsunset 4 days ago

            What is the state of WASM support in Go though? :)

            I doubt the ability to cross-compile TSC would have been a major factor. These artifacts are always produced on dedicated platforms via separate build stages before publishing and sign-off. Indeed, Go is better at native cross-compilation, whereas .NET NativeAOT can only do cross-arch and limited cross-OS by tapping into the Zig toolchain.

            • 9rx 3 days ago

              > What is the state of WASM support in Go though? :)

              The gc compiler considers it a first-class build target.

              Like C#, the binary tends to be on the larger side[1], which makes it less than ideal for driving a marketing website, but that's not really a problem here. Installation of tools before use is already the norm.

              [1] The tinygo compiler can produce very small WASM binaries, comparable to C, albeit with some caveats (surprisingly, the extra data gc produces isn't just noise).
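
              As a point of reference, here is a minimal sketch of that first-class target in practice: build with GOOS=js GOARCH=wasm and load the module with the wasm_exec.js glue that ships with the Go distribution (the exported function name below is arbitrary):

              ```go
              //go:build js && wasm

              // Expose a Go function to JavaScript when compiled to WebAssembly.
              package main

              import "syscall/js"

              func main() {
                  js.Global().Set("greet", js.FuncOf(func(this js.Value, args []js.Value) any {
                      return "hello from Go, " + args[0].String()
                  }))
                  select {} // keep the runtime alive so the exported callback stays usable
              }
              ```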

            • kokada 4 days ago

              > What is the state of WASM support in Go though? :)

              I am sure it is good enough that the team decided to choose Go either way OR it is not important for this project.

              > I doubt the ability to cross-compile TSC would have been a major factor.

              I never said it was a major factor (I even said "I don't think it would be a deal breaker"), but it is a factor nonetheless. It definitely helps a lot during cross-platform debugging, since you don't need to set up a whole toolchain just to test a bug on another platform; instead you can simply build a binary on your development machine and send it to the other machine.

              But the only reason I asked this is because I was curious really, no need to be so defensive.

      • pjmlp 4 days ago

        Native AOT exists, and C# has many C++ like capabilities, so not at all.

        • reactordev 4 days ago

          It exists but isn’t the same as a natively compiled binary. A lot gets packed into an AOT binary for it to work. Longer startup times, more memory, etc.

          • pjmlp 4 days ago

            Just like Go, there is no magic here.

            Where do you think Go gets those chubby statically linked executables from?

            So much so that people have to apply UPX on top.

            • reactordev 4 days ago

              Go’s static binaries are orders of magnitude smaller than .Net’s static binaries. However, you are right, all binaries have some bloat in order to make them executable.

              • metaltyphoon 4 days ago

                This is flat out incorrect if you are doing AOT in C#

                • reactordev 3 days ago

                  How? What I said was fact: all executables have some bloat that makes them executable, Go’s being the smaller of the two. Whether it’s a shared lib pointer or static…

    • rob74 4 days ago

      Seeing that Hejlsberg started out with Turbo Pascal and Delphi, and that Go also has a lot of Pascal-family heritage, he might hold some sympathy for Go as well...

      • pjmlp 4 days ago

        Yes, there is that irony. However, when these kinds of decisions are made by folks with historical roots in how .NET and C# came to be, the .NET team cannot wonder why .NET keeps lagging in adoption versus other ecosystems at companies that aren't traditional Microsoft shops.

    • tgv 4 days ago

      Not involved, but there's a faq in their repo, and this answers your question, perhaps, a bit: https://github.com/microsoft/typescript-go/discussions/411

      • pjmlp 4 days ago

        Thanks, but it really doesn't clarify why a team with roots in the .NET ecosystem decided C#/Native AOT isn't fit for purpose.

        • spankalee 4 days ago

          I don't understand what Anders' past involvement with C# has to do with this. Would the technical evaluation be different if done by Anders vs someone else?

          • mike_hearn 4 days ago

            C# and Go are direct competitors and the advantages of Go that were cited are all features of C# as well, except the lack of top level functions. That's clearly not an actual problem: you can just define a class per file and make every method static, if that's how you like to code. It doesn't require any restructuring of your codebase. There's also no meaningful difference in platform support, .NET AOT supports Win/Mac/Linux on AMD64/ARM i.e. every platform a developer might use.

            He clearly knows all this so the obvious inference is that the decision isn't really about features. The most likely problem is a lack of confidence in the .NET team, or some political problems/bad blood inside Microsoft. Perhaps he's tried to use it and been frustrated by bugs; the comment about "battle hardened" feels like where the actual rationale is hiding. We're not getting the full story here, that's clear enough.

            I'm honestly surprised Microsoft's policies allowed this. Normally companies have rules that require dogfooding for exactly this reason. Such a project is not terribly urgent, and it has political heft within Microsoft. They could presumably have got the .NET team to fix bugs or make optimizations they need, at least a lot easier than getting the Go team to do it. Yet they chose not to. Who would have any confidence in adoption of .NET for performance sensitive programs now? Even the father of .NET doesn't want to use it. Anyone who wants to challenge a decision to adopt it can just point at Microsoft's own actions as evidence.

            • neonsunset 4 days ago

              Thanks, this is a good way to frame it, someone else also phrased similar sentiment which I'm in total agreement with: https://x.com/Lon/status/1899527659308429333

              It is especially jarring given that they are a first-party customer who would have no trouble in getting necessary platforms supported or projects expedited (like NativeAOT-LLVM-WASM) in .NET. And the statements of Anders Hejlsberg himself which contradict the facts about .NET as a platform make this even more unfortunate.

              • mike_hearn 4 days ago

                I wonder if there's just some cultural / generational stuff happening there too. The fact that the TS compiler is all about compiling a highly complex OOP/functional hybrid language yet is said to use neither objects nor FP seems rather telling. Hejlsberg is famous for designing object oriented languages (Delphi, C#) but the Delphi compiler itself was written largely in assembly, and the C# compiler was for a very long time written in C++ iirc. It's possible that he just doesn't personally like working in the sort of languages he gets paid to design.

                There's an interesting contrast here with Java, where javac was ported to Java from C++ very early on in its lifecycle. And the Java AOT compiler (native image) is not only fully written in Java itself, everything from optimizations to code generation, but even the embedded runtime is written in Java too. Whereas in the .NET world Roslyn took quite a long time to come along, it wasn't until .NET 6, and of course MS rejected it from Windows more or less entirely for the same sorts of rationales as what Anders provides here.

                • neonsunset 4 days ago

                  > Roslyn

                  It was introduced back then with .NET Framework 4.6 (C# 6) - a loong time ago (July 2015). The OSS .NET has started with Roslyn from the very beginning.

                  > And the Java AOT compiler (native image) is not only fully written in Java itself, everything from optimizations to code generation, but even the embedded runtime is written in Java too.

                  NativeAOT uses the same architecture. There is no C++ besides GC and pre-existing compiler back-end (both ILC and RyuJIT drive it during compilation process). Much like GraalVM's Native Image, the VM/host, type system facilities, virtual/interface dispatch and everything else it could possibly need is implemented in C# including the linker (reachability analysis/trimming, kind of like jlink) and optimizations (exact devirtualization, cctor interpreter, etc.).

                  In the end, it is the TypeScript team members who worked on this port, not Anders Hejlsberg himself, which is my understanding. So we need to take this into account when judging what is being communicated.

                  • mike_hearn 4 days ago

                    Ah C# 6 not .NET 6, thanks for the correction. Cool to hear that the NativeAOT stuff follows the same path.

            • i_s 4 days ago

              Yea, I came here to say the same thing. Anders' reasons for not going with C# all seem either dubious or superficial and easily worked around.

              First he mentions the no classes thing. It is hard to see how that would matter even for automated porting, because like you said, he could just use static classes, and even do a static using statement on the calling side.

              Another one of his reasons was that Go was good at processing complex graphs, but it is hard to imagine how Go would be better at that than C#. What language feature does Go have, and C# lack, that supports that? I don't think anyone will be able to demonstrate one. This distinction makes sense for Go vs Rust, but not for Go vs C#.

              As for the platform / AOT argument, I don't know as much about that, but I thought it was supposed to be possible now. If it isn't, it seems like it would be better for Microsoft to beef that up than to allow a vote of no confidence to be cast like this.

          • pjmlp 4 days ago

            Yes, when the author of the language feels it is unfit for purpose, it is a different marketing message than a random dude on the Internet on his new startup project.

        • vessenes 4 days ago

          Pure speculation, but C# is not nearly the first class citizen that go binaries are when you look at all possible deployment targets. The “new” Microsoft likely has some built-in bias against “embrace and extend” architectural and business decisions for developers. Overall this doesn’t seem like a hard choice to me.

          Cue rust devotees in 3, 2, ..

  • jasonthorsness 4 days ago

    I write a lot of Go and a decent amount of TypeScript. Was there anything you found during this project that you found particularly helpful/nice in Go, vs. TypeScript? Or was there anything about Go that increased the difficulty or required a change of approach?

  • jherdman 4 days ago

    I'd be curious to hear about the politics and behind-the-scenes of this project. How did you get buy-in? What were some of the sticking points in getting this project off of the ground? When you mention that many other languages were used to spike the new compiler, were there interesting learnings?

  • culi 4 days ago

    > While we’re not yet feature-complete

    This is a big concern to me. Could you expand on what work is left to do for the native implementation of tsc? In particular, can you make an argument why that last bit of work won't reduce these 10x figures we're seeing? I'm worried the marketing got ahead of the engineering

    • sashank_1509 4 days ago

      It’s fine, if it’s 2x faster after being feature complete, I don’t really mind. It still is a free speedup to all existing code-bases. Developers don’t need to do anything other than install the latest version of TypeScript, I presume.

      • culi 3 days ago

        With an added dependency on golang. Might have some consequences for certain people's build process

  • eknkc 4 days ago

    I feel like you'll need to provide a wasm binary for browser environments and maybe as a fallback in node itself. Last time I checked, Go really struggles to perform when targeting wasm. This might be the only reason I'd like to see it in Rust but I'm still glad you went with Go.

    Are there any insights on the platform decision?

    • jchw 4 days ago

      Honestly, the choice seems fine to me: the vast majority of users are not compiling huge TypeScript projects in the browser. If you're using Vite/ESBuild, you're already using a Go-based JS toolchain, and last I checked Vite was pretty darn popular. I don't suspect there will be a huge burden for things like playground; given the general performance uplift that the Go tsc implementation already gets, it may in fact be faster even after paying the Wasm tax. (And even if it isn't, it should be more than fine for playground anyways.)

      • merb 4 days ago

        I‘m pretty sure that a lot of vite users with hot reload will run tsc inside the browser (tanstack, react-router)

        • jchw 4 days ago

          I am not a Vite expert, however, when running Vite in dev mode, I can see two things:

          - There is an esbuild process running in the background.

          - If I look at the JavaScript returned to the browser, it is transpiled without any types present.

          So even though the URLs in Vite dev mode look like they're pointing to "raw" TypeScript files, they're actually transpiled JavaScript, just not bundled.

          I could be incorrect, of course, but it sure seems to me like Vite is using ESBuild on the Node.JS side and not tsc on the web browser side.

  • gwbas1c 4 days ago

    Thanks for answering questions.

    One thing I'm curious about: What about updating the original Typescript-based compiler to target WASM and/or native code, without needing to run in a Javascript VM?

    Was that considered? What would (at a high level) the obstacles be to achieving similar performance to Golang?

    Edit: Clarified to show that I indicate updating the original compiler.

    • rafram 4 days ago

      It's unlikely that you would get much performance benefit from AOT compiling a TypeScript codebase. (At least not without a ton of manual optimization of the native code, and if you're going to do that, why not just rewrite in a native-first language?)

      JavaScript, like other dynamic languages, runs well with a JIT because the runtime can optimize for hotspots and common patterns (e.g. this method's first argument is generally an object with this shape, so write a fast path for that case). In theory you could write an AOT compiler for TypeScript that made some of those inferences at compile time based on type definitions, but

      (a) nobody's done that

      (b) it still wouldn't be as fast as native, or much faster than JIT

      (c) it would be limited - any optimizations would die as soon as you used an inherently dynamic method like JSON.parse()

      • gwbas1c 4 days ago

        So basically, TypeScript as a language doesn't allow compiling to machine code as efficient as Go's? (Edit) And I assume it's not practical to alter the language in a way that this kind of information can be added. (Such as adding a typed version of JSON.parse())

  • h1fra 4 days ago

    Amazing news, but I'm wondering what will happen to Monaco editor and all the SaaS that use typescript in the browser?

    • dataviz1000 4 days ago

      Not sure if it does, but the video linked in the post might answer your question? I think he is compiling VS Code, which includes the Monaco editor, and that's where they are getting the 10x-faster stat. (I might be wrong here.) [0]

      [0] https://youtu.be/pNlq-EVld70?feature=shared&t=112

      • h1fra 4 days ago

        Yeah, I saw that, but whether they'll maintain a browser-compatible version is another question.

        • dataviz1000 4 days ago

          Ah, inception compiling. The issue isn't compiling the Monaco editor, but rather will the Monaco editor compile TypeScript 7 in the browser?

          That is a good question.

  • Etheryte 4 days ago

    This might be an oddly specific question, but do you think performance improvements like this might eventually lead to features like partial type argument inference in generics? If I recall correctly off the top of my head, performance was one of the main reasons it was never implemented.

  • dimitropoulos 4 days ago

    thank you, to both of you, for so many years of groundbreaking work. you've both been on the project for, what, 11 years now? such legends.

  • nozzlegear 4 days ago

    > You can also tune in to the Discord AMA mentioned in the blog this upcoming Thursday.

    Will the questions and answers be posted anywhere outside of Discord after it's concluded?

    • AshleyGrant 4 days ago

      Daniel, please make this a priority. Post the Q&A transcript to GitHub, at least.

  • ackfoobar 4 days ago

    Will we still have compiler plugins? What will this mean for projects like ts-patch?

  • umvi 4 days ago

    Since the new tsc is written in go, will I be able to pull it into my go web server as a middleware to dynamically transpile ts?

    • DanRosenwasser 4 days ago

      We'll be working on an API that ideally can be used through any language - that would be our preferred means of consuming the new codebase.

  • jauntywundrkind 4 days ago

    What forward paths are available for efforts like the TS Playground under TypeScript 7 (native)?

    One of the nice advantages of js is that it can run so many places. Will TypeScript still be able to enjoy that legacy going forward, or is native only what we should expect in 7+?

    • DanRosenwasser 4 days ago

      We anticipate that we will eventually get a playground working on the new native codebase. We know we'll likely compile down to WebAssembly, but a lot of how it gets integrated will depend on what the API looks like. We're currently giving a lot of thought to that, but we have good ideas. https://github.com/microsoft/typescript-go/discussions/455

      • Tadpole9181 4 days ago

        Will this be a prerequisite of the 7.0 release?

  • phpnode 4 days ago

    This is very exciting! I'm curious if this move eventually unlocks features that have been deemed too expensive/slow so far, e.g. typing `ReactElement` more accurately, typing `TemplateStringsArray` etc

  • titzer 4 days ago

    I'm curious about the choice of Go to develop the new toolchain. Was the support for parallelism/concurrency a factor in the decision?

  • timmg 3 days ago

    Does this mean the future toolset will be less reliant on NPM (etc)?

    Maybe I'm unique, but I prefer not to have the compiler tied up with a particular dependency-management/library system.

  • ksec 4 days ago

    Is 10x a starting point or could we expect even more improvements in the future?

  • jakub_g 4 days ago

    Hi Daniel! What's your stance on support for yarn pnp?

    • DanRosenwasser 4 days ago

      pnp is still very cool, and it would be great if we can find a better API story that works well with pnp!

  • imbnwa 4 days ago

    Will the refactor possibly be an occasion for ironing out a spec?

  • pbreit 4 days ago

    Amazing!! I did not see timing. When might we see in VS Code? Edge?

  • loevborg 4 days ago

    Congrats on the announcement, this is a great achievement!

  • sbjs 4 days ago

    Your patience with Michael Saboff is incredible.

  • felixrieseberg 4 days ago

    Daniel, congrats! I'm _so_ excited about everything y'all have achieved in the last few years.

  • ezekg 4 days ago

    When can we just replace the JS runtime with TS and skip the compiler altogether? Start fresh, if you will.

  • devit 4 days ago

    [flagged]

    • umvi 4 days ago

      > inexpressive type system

      Simplicity is a feature, not a bug. Overly expressive languages become nightmares to work with and reason about (see: C++ templates)

      Go's compilation times are also extremely fast compared to Rust, which is a non-negligible cost when iterating on large projects.

  • polskibus 4 days ago

    Have you considered a closer-to-the-metal language to implement the compiler in, like C or Rust? Have you evaluated further perf improvements?

    • davedx 4 days ago

      I don't think C or Rust are really 'closer to the metal' than Go (which is what they're using)

      • cube00 4 days ago

        Considering Go is the only language with a garbage collector out of the three languages you mentioned, I'm not sure how you reach the conclusion they're all as close to the metal.

        C and Rust both have predictable memory behaviour, Go does not.

        • gwbas1c 4 days ago

          When I read the article it was very clear, due to the compiler's in-memory graphs, that they needed a GC.

          (I.e., as opposed to reference counting, where if you have cyclic loops, you need to manually go in and "break" the loop so memory gets reclaimed.)

          • tuveson 4 days ago

            > When I read the article it was very clear, due to the compiler's in-memory graphs, that they needed a GC.

            It's actually pretty easy to do something like this in C, just using something like an arena allocator or, honestly, leaking memory. I actually wrote a little allocator yesterday that just dumps memory into a linked list; it's not very complicated: http://github.com/danieltuveson/dsalloc/

            You allocate wherever you want, and when you're done with the big messy memory graph, you throw it all out at once.

            There are obviously a lot of other reasons to choose go over C, though (easier to learn, nicer tooling, memory safety, etc).

            • gwbas1c 4 days ago

              I get the impression they'd use smart pointers (C++) or Rc/Arc (Rust)

        • nasretdinov 4 days ago

          Go isn't that bad in terms of memory predictability, to be honest. It generally has roughly 100% overhead in terms of memory usage compared to no GC. This can be reduced with the GOGC env variable, at the cost of worse performance if you're not careful.
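
          For what it's worth, the same knob is also exposed programmatically via runtime/debug, so you don't have to rely on the environment. A minimal sketch (the 50 here is purely an illustrative value, not a recommendation):

              package main

              import (
                  "fmt"
                  "runtime/debug"
              )

              func main() {
                  // Equivalent to running with GOGC=50: the GC triggers once the heap has
                  // grown 50% past the live set, trading extra CPU time for a smaller heap.
                  old := debug.SetGCPercent(50)
                  fmt.Println("previous GOGC percent:", old)
              }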

  • conartist6 4 days ago

    Hi Daniel!

    Really interesting news, and uniquely dismaying to me as someone who is fighting tooth and claw to keep JS language tooling in the JS ecosystem.

    My question has to do with Ryan's statement:

    > We also considered hybrid approaches where certain components could be written in a native language, while keeping core typechecking algorithms in JavaScript

    I've experimented deeply in this area (maybe 15k hours invested in BABLR so far) and what I've found is that it's richly rewarding. Javascript is fast enough for what is needed, and its ability to cache on immutable data can make it lightning fast not through doing more work faster, but by making it possible to do less work. In other words, change the complexity class not the constant factor.

    Is this a direction you investigated? What made you decide to try to move sideways instead of forwards?

    • moogly 4 days ago

      > as someone who is fighting tooth and claw to keep JS language tooling in the JS ecosystem

      Have you considered the man-years and energy you're making everyone waste? Just as an example, I wonder what the carbon footprint of ESLint has been over the years...

      Now, it pales in comparison to Python, but still...

      • conartist6 4 days ago

        I'm no more thrilled than you at the cost of running ESLint, but using a high-level language doesn't need to mean being wasteful of resources.

        TS currently wastes tons of resources (most especially people's time) by not being able to share its data and infrastructure with other tools and ecosystems. There would be much bigger wins from tackling that systemic problem, but then you wouldn't be able to say something as glib as "TS is 10x faster". Only the work that can be distilled to a metric gets done now, because that's how to get a promotion when you work for a company like Microsoft.

        • Timon3 4 days ago

          If I could choose between Typescript speeding up 10x or all the surrounding tooling speeding up 20x, I'd take Typescript in a heartbeat. Slow type checking is the biggest pain point in my daily dev cycle.

          Thank you Typescript team for chasing those promotions!

  • hedora 4 days ago

    Go is an extremely strange choice, given the ecosystem you're targeting. I've got quite a bit of experience in it, TS, Rust and C++. I'd pick any of those over Go for productivity and (in the case of C++ and Rust) thread-safety, simply because Go's type system is so impoverished.

    From a performance perspective, I'd expect C++ and Rust to be much easier targets too, since I've seen quite a few industrial Go services be rewritten in C++/Rust after they fail to meet runtime performance / operability targets.

    Wasn't there a recent study from Google that came to the same conclusion? (They see improved productivity for Go with junior programmers that don't understand static typing, but then they can never actually stabilize the resulting codebase.)

bcherny 4 days ago

Fast dev tools are awesome and I am glad the TS team is thinking deeply about dev experience, as always!

One trade off is if the code for TS is no longer written in TS, that means the core team won’t be dogfooding TS day in and day out anymore, which might hurt devx in the long run. This is one of the failure modes that hurt Flow (written in OCaml), IMO. Curious how the team is thinking about this.

  • DanRosenwasser 4 days ago

    Hey bcherny! Yes, dog-fooding (self-hosting) has definitely been a huge part in making TypeScript's development experience as good as it is. The upside is the breadth of tests and infrastructure we've already put together to watch out for regressions. Still, to supplement this I think we will definitely be leaning a lot on developer feedback and will need to write more TypeScript that may not be in a compiler or language service codebase. :D

    • rattray 4 days ago

      Interesting! This sounds like a surprisingly hard problem to me, from what I've seen of other infra teams.

      Does that mean more "support rotations" for TS compiler engineers on GitHub? Are there full-stack TS apps that the TS team owns that ownership can be spread around more? Will the TS team do more rotations onto other teams at MSFT?

  • pjc50 4 days ago

    Ultimately the solution has to be breaking the browser's monopoly on JS, via performance parity for WASM or some other route, so that developers can instead dogfood performant languages across all their tooling, front end, and back end.

    • austin-cheney 4 days ago

      First, this thread and article have nothing to do with language and/or application execution performance. It is only about the tsc compiler execution time.

      Second, JavaScript already executes quickly. Aside from arithmetic operations it has now reached performance parity with Java, and highly optimized JavaScript (typed arrays and an understanding of how arrays and objects are accessed in memory) can come within 1.5x of the execution speed of C++. At this point all the slowness of JavaScript is related to things other than code execution, such as garbage collection, unnecessary framework code bloat, and poorly written code.

      That being said, it isn't realistic to expect significantly faster execution times from replacing JavaScript with a WASM runtime. This is even more true considering that many performance problems with JavaScript in the wild are human problems more than technology problems.

      Third, WASM has nothing to do with JavaScript, according to its originators and maintainers. WASM was never created to compete with, replace, modify, or influence JavaScript. WASM was created as a language-agnostic Flash replacement running in a sandbox. And since WASM executes in its own agnostic sandbox, the cost of replacing the existing runtime is high: a JavaScript runtime is already available, whereas standing up a WASM runtime is more akin to installing a desktop application for a first run.

      • sebzim4500 4 days ago

        How do you reconcile this view with the fact that the TypeScript team rewrote the compiler in Go and it got 10x faster? Do you think they could have kept it in TypeScript and achieved similar performance, but didn't for some reason?

        • auxiliarymoose 4 days ago

          This was touched on in the video a little bit—essentially, the TypeScript codebase has a lot of polymorphic function calls, and so is generally hard to JIT optimize. JS to Go therefore yielded a direct ~3.5x improvement.

          The rest of the 10x comes from multi-threading, which wasn't possible to do in a simple way in the JS compiler (efficient multithreading while writing idiomatic code is hard in JS).

          JavaScript is very fast for single-threaded programs with monomorphic functions, but in the TypeScript compiler's case, the polymorphic functions and opportunity for parallelization mean that Go is substantially faster while keeping the same overall program structure.
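
          To make the multi-threading half of that concrete, here's a minimal sketch of the structure (purely illustrative, not the actual compiler code): one goroutine per source file, each writing only its own result slot, with aggregation happening after the WaitGroup finishes.

              package main

              import (
                  "fmt"
                  "os"
                  "sync"
              )

              // parse stands in for the real per-file parse step; here it just counts bytes.
              func parse(src []byte) int {
                  return len(src)
              }

              func main() {
                  files := os.Args[1:]
                  results := make([]int, len(files))

                  var wg sync.WaitGroup
                  for i, name := range files {
                      wg.Add(1)
                      go func(i int, name string) { // one goroutine per source file
                          defer wg.Done()
                          src, err := os.ReadFile(name)
                          if err != nil {
                              return
                          }
                          results[i] = parse(src) // each goroutine writes only its own slot
                      }(i, name)
                  }
                  wg.Wait() // aggregate once all files are parsed

                  for i, name := range files {
                      fmt.Printf("%s: %d bytes\n", name, results[i])
                  }
              }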

        • austin-cheney 4 days ago

          I have no idea about the details of their test cases. If they had used an even faster language like Cobol or Fortran maybe they could have gotten it 1,000,000x faster.

          What I do know is that some people complain about long compile times in their code that can last up to 10 minutes. I had a personal application that was greater than 60k lines of code and the tsc compiler would compile it in about 13 seconds on my super old computer. SWC would compile it in about 2.5 seconds. This tells me the far greater opportunity for performance improvement is not in modifying the compiler but in modifying the application instance.

          • zombot 4 days ago

            > maybe they could have gotten it 1,000,000x faster.

            WTF.

            • flykespice 3 days ago

              Yeah this is an overly exaggerated claim

              • zombot 3 days ago

                It was unwarranted sarcastic snark. That commenter was bitten by some bug.

      • mmcnl 4 days ago

        Very short, succinct and informative comment. Thank you.

    • bloomingkales 4 days ago

      Are you looking for non-browser performance such as 3d? I see no case that another language is going to bring performance to the DOM. You'd have to be rendering straight to canvas/webgl for me to believe any of this.

  • jillyboel 4 days ago

    The issue with Flow is that it's slow, flaky and has shifted the entire paradigm multiple times making version upgrades nearly impossible without also updating your dependencies, IF your dependencies adopted the new flow version as well. Otherwise you're SOL.

    As a result the amount of libraries that ship flow types has absolutely dwindled over the years, and now typescript has completely taken over.

    • matclayton 4 days ago

      Our experience is the opposite: we have a pretty large Flow-typed code base and can do a full check in <100ms. When we converted it to TS (we decided not to merge that), we saw TypeScript land in the multiple-minute mark. It’s worth checking out LTI and how typing the boundaries enables Flow to parallelize and give very precise error messages compared to TS. Third-party lib support is, however, basically dead, except that the latest versions of Flow are starting to enable ingestion of TS types, so that’s interesting.

  • axkdev 4 days ago

    They should write a typescript-to-go transpiler (in typescript) , so that they can write their compiler in typescript and use typescript to transpile it to go.

zoogeny 4 days ago

I notice this time and time again: projects start with a flexible scripting language and a promise that the performance will be sufficient. I mean, JS is pretty performant as scripting languages go and it is hard to think of any language runtimes that get more attention than the browser VMs. And generally, 90% of the things people do will run sufficiently fast in that VM.

Yet projects inevitably get to the stage where a more native representation wins out. I mean, I can't think of a time a high profile project written in a lower level representation got ported to a higher level language.

It makes me think I should be starting any project I have in the lowest level representation that allows me some ergonomics. Maybe more reason to lean into Zig? I don't mean for places where something like Rust would be appropriate. I mean for anything I would consider using a "good enough" scripting language.

It honestly has me questioning my default assumption to use JS runtimes on the server (e.g. Node, deno, bun). I mean, the benefit of using the same code on the server/client has rarely if ever been a significant contributor to project maintainability for me. And it isn't that hard these days to spin up a web server with simple routing, database connectivity, etc. in pretty much any language including Zig or Go. And with LLMs and language servers, there is decreasing utility in familiarity with a language to be productive.

It feels like the advantages of scripting languages are being eroded away. If I am planning a career "vibe coding" or prompt engineering my way into the future, I wonder how reasonable it would be to assume I'll be doing it to generate lower level code rather than scripts.

  • throwitaway1123 4 days ago

    > I mean, I can't think of a time a high profile project written in a lower level representation got ported to a higher level language.

    Prisma is currently being rewritten from Rust to TypeScript: https://www.prisma.io/blog/rust-to-typescript-update-boostin...

    > Yet projects inevitably get to the stage where a more native representation wins out.

    I would be careful about extrapolating the performance gains achieved by the Go TypeScript port to non-compiler use cases. A compiler is perhaps the worst use case for a language like JS, because it is both (as Anders Hejlsberg refers to it) an "embarrassingly parallel task" (each source file can be parsed independently) and one that requires the results of the parsing step to be aggregated and shared across multiple threads (which requires shared-memory multithreading of AST objects). Over half of the performance gains can be attributed to being able to spin up a separate goroutine to parse each source file. Anders explains it perfectly here: https://www.youtube.com/watch?v=ZlGza4oIleY&t=2027s

    We might eventually get shared memory multithreading (beyond Array Buffers) in JS via the Structs proposal [1], but that remains to be seen.

    [1] https://github.com/tc39/proposal-structs?tab=readme-ov-file
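
    To make the "parse in parallel, then aggregate over shared memory" point concrete, here's a minimal Go sketch (purely illustrative, not how the actual port is structured): each worker goroutine "parses" a file into a local table and then merges it into one shared in-memory map under a lock, with no serialization step between workers.

        package main

        import (
            "fmt"
            "os"
            "strings"
            "sync"
        )

        func main() {
            files := os.Args[1:]

            // Shared in-memory aggregate that every worker merges into directly:
            // no IPC or serialization boundary, just a lock around the merge.
            symbols := make(map[string]int)
            var mu sync.Mutex

            var wg sync.WaitGroup
            for _, name := range files {
                wg.Add(1)
                go func(name string) { // one goroutine per source file
                    defer wg.Done()
                    src, err := os.ReadFile(name)
                    if err != nil {
                        return
                    }
                    // Stand-in for real parsing: count whitespace-separated tokens.
                    local := make(map[string]int)
                    for _, tok := range strings.Fields(string(src)) {
                        local[tok]++
                    }
                    mu.Lock()
                    for tok, n := range local {
                        symbols[tok] += n
                    }
                    mu.Unlock()
                }(name)
            }
            wg.Wait()

            fmt.Println("distinct tokens across all files:", len(symbols))
        }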

    • zoogeny 4 days ago

      I think the Prisma case is a bit of a red herring. First, they are using WASM, which itself is a low-level representation. Second, the performance gains appear to come primarily from avoiding the marshalling of data from JavaScript into Rust (and back again, I presume). Basically, if the majority of your application is already in JavaScript and expects primarily to interact with other code written in JavaScript, it usually doesn't make sense to serialize your data, pass it to another runtime for some processing, then pass the result back.

      As for the "compilers are special" reasoning, I don't ascribe to it. I suppose because it implies the opposite: that something (other than a compiler) is especially suited to run well in a scripting language. But the former doesn't imply the latter in reality, so the case should be made independently. The Prisma case is one: you are already dealing with JavaScript objects, so it is wise to stay in JavaScript. The old cases I would choose the scripting language (familiarity, speed of adding new features, ability to hire a team quickly) seem to be eroding in the face of LLMs.

      • throwitaway1123 4 days ago

        > First, they are using WASM, which itself is a low-level representation.

        WASM is used to generate the query plan, but query execution now happens entirely within TypeScript, whereas under the previous architecture both steps were handled by Rust. So in a very literal sense some of the Rust code is being rewritten in TypeScript.

        > Basically, if the majority of your application is already in JavaScript and expects primarily to interact with other code written in JavaScript, it usually doesn't make sense to serialize your data, pass it to another runtime for some processing, then pass the result back.

        My point was simply to refute the assertion that once software is written in a low level language, it will never be converted to a higher level language, as if low level languages are necessarily the terminal state for all software, which is what your original comment seemed to be suggesting. This feels like a bit of a "No true Scotsman" argument: https://en.wikipedia.org/wiki/No_true_Scotsman

        > As for the "compilers are special" reasoning, I don't ascribe to it.

        Compilers (and more specifically lexers and parsers) are special in the sense that they're incredibly well suited for languages with shared memory multithreading. Not every workload fits that profile.

        > The old cases I would choose the scripting language (familiarity, speed of adding new features, ability to hire a team quickly) seem to be eroding in the face of LLMs.

        I'm not an AI pessimist, but I'm also not an AI maximalist who is convinced that AI will completely eliminate the need for human code authoring and review, and as long as humans are required to write and review code, then those benefits still apply. In fact, one of the stated reasons for the Prisma rewrite was "skillset barriers". "Contributing to the query engine requires a combination of Rust and TypeScript proficiency, reducing the opportunity for community involvement." [1]

        [1] https://www.prisma.io/blog/from-rust-to-typescript-a-new-cha...

        • zoogeny 4 days ago

          I'm not denying the facts of the matter, I am denying the conclusion. The circumstances of the situation are relevant. Marshalling costs across IPC boundaries come into play in every single situation regardless of language; it is why shared-memory architectures exist. It doesn't matter what language is on the other side of the IPC: if the performance gained by using a separate process is not greater than the cost of the communication, then you should avoid the IPC. One way to avoid that cost is to share the memory. In the case of code already running in a JavaScript VM, a very easy way to share the memory is to do the processing in JavaScript.

          That is why I am saying your evidence is a red herring. It is a case where a reasonable decision was made to rewrite in JavaScript/TypeScript but it has nothing to do with the merits of the language and everything to do with the environment that the entire system is running in. They even state the Rust code is fast (and undoubtedly faster than the JS version), just not fast enough to justify the IPC cost.

          And it in no way applies to the point I am making, where I explicitly question "starting a new project" for example "my default assumption to use JS runtimes on the server". It's closer to a "Well, actually ..." than an attempt to clarify or provide a reasoned response.

          The world is changing before our eyes. The coding LLMs we have already are good but the ones in the pipeline are better. The ones coming next year are likely to be even better. It is time to revisit our long held opinions. And in the case of "reads data from a OS socket/file-descriptor and writes data to a OS socket/file-descriptor", which is the case for a significant number of applications including web servers, I'm starting to doubt that choosing a scripting language for that task, as I once advocated, is a good plan given what I am seeing.

          • throwitaway1123 4 days ago

            Rather than fixating on this single Prisma example, I'd like to address your larger point which seems to be that all greenfield projects are necessarily best suited to low level languages.

            First of all, I would argue that software rewrites are a bad proxy metric for language quality in general. Language rewrites don't measure languages purely on a qualitative scale, but rather on a scale of how likely they are to be misused in the wrong problem domain.

            Low level languages tend to have a higher barrier to entry, which as a result means they're less likely to be chosen on a whim during the first iteration of a project. This phenomenon is exhibited not just at the macroscopic level of language choice, but often times when determining which data structures and techniques to use within a specific language. I've very seldomly found myself accidentally reaching for a Uint8Array or a WeakRef in JS when a normal array or reference would suffice, and then having to rewrite my code, not because those solutions are superior, but because they're so much less ergonomic that I'm only likely to use them when I'm relatively certain they're required.

            This results in obvious selection bias. If you were to survey JS developers and ask how often they've rewritten a normal reference in favor of a WeakRef vs the opposite migration, the results would be skewed because the cost of dereferencing WeakRefs is high enough that you're unlikely to use them hastily. The same is true to a certain extent in regards to language choice. Developers are less likely to spend time appeasing Rust's borrow checker when PHP/Ruby/JS would suffice, so if a scripting language is the best choice for the problem at hand, they're less likely to get it wrong during the first iteration and have to suffer through a massive rewrite (and then post about it on HN). I've seen plenty of examples of competent software developers saying they'd choose a scripting language in lieu of Go/Rust/Zig. Here's the founder of Hashicorp (who built his company on Go, and who's currently building a terminal in Zig), saying he'd choose PHP or Rails for a web server in 2025: https://www.youtube.com/watch?v=YQnz7L6x068&t=1821s

            • zoogeny 4 days ago

              > your larger point which seems to be that all greenfield projects are necessarily best suited to low level language

              That is not my intention. Perhaps you are reading absolutes and chasing after black and white statements. When I say "it makes me think I should ..." I am not saying: "Everyone everywhere should always under any circumstances ...". It is a call to question the assumption, not to make emphatic universal decisions on any possible project that could ever be conceived. That would be a bad faith interpretation of my post. If that is what you are arguing against, consider if you really believe that is what I meant.

              So my point stands: I am going to consider this more deeply rather than default assuming that an interpreted scripting language is suitable.

              > Low level languages tend to have a higher barrier to entry,

              I almost think you aren't reading my post at this point and are just arguing with a strawman you invented in your head. But I am assuming good faith on your part here, so once again I'll just repeat myself again and again: LLMs have already changed the barrier to entry for low-level languages and they will continue to do so.

              • throwitaway1123 4 days ago

                > That is not my intention. Perhaps you are reading absolutes and chasing after black and white statements.

                The first comment I wrote in this thread was a response to the following quote: "Yet projects inevitably get to the stage where a more native representation wins out." Inevitable means impossible to evade. That's about as close to a black and white statement as possible. You're also completely ignoring the substance of my argument and focusing on the wording. My point is that language rewrites (like the TS rewrite that sparked this discussion) are a faulty indicator of scripting language quality.

                > I almost think you aren't reading my post at this point and are just arguing with a strawman you invented in your head. But I am assuming good faith on your part here, so once again I'll just repeat myself again and again: LLMs have already changed the barrier to entry for low-level languages and they will continue to do so.

                And I've already said that I disagree with this assertion. I'll just quote myself in case you haven't read through all my comments: "I'm not an AI pessimist, but I'm also not an AI maximalist who is convinced that AI will completely eliminate the need for human code authoring and review, and as long as humans are required to write and review code, then those benefits [of scripting languages] still apply." I was under the impression that I didn't have to keep restating my position.

                I don't believe that AI has eroded the barriers of entry to the point where the average Ruby or PHP developer will enjoy passing around memory allocators in Zig while writing API endpoints. Neither of us can be 100% certain about what the future holds for AI, but as someone else pointed out, making technical decisions in the present based on AI speculation is a gamble.

                • zoogeny 4 days ago

                  Ah, now we're at the dictionary definition level. So let's check Google:

                      Inevitable:
                            as is certain to happen; unavoidably.
                         informal
                            as one would expect; predictably.
                            "inevitably, the phone started to ring just as we sat down"
                  
                  Which interpretation of the word is "good faith" considering the rest of my post? If I said "If you drink and drive you will inevitably get into an accident" - would you argue against that statement? Would you argue with Google and say "I have sat down before and the phone didn't ring"?

                  It is Hacker News policy and just good internet etiquette to argue with good faith in mind. I find it hard to believe you could have read my entire post and come away with the belief of absolutism.

                  edit: Just to add to this, your interpretation assumes I think Django (the Python web application framework) will unavoidably be rewritten in a lower level language. And Ruby on Rails will unavoidably be rewritten. Do you believe that is what I was saying? Do you believe that I actually believe that?

                  • throwitaway1123 4 days ago

                    I wrote 362 words on why language rewrites are a faulty indicator of language quality with multiple examples and anecdotes, and you hyper-fixated on the very first sentence of my comment, instead of addressing the substance of my claim. In what alternate universe is that a good faith argument? If you were truly arguing in good faith you'd restate your position in whichever way you'd like your argument represented, and then proceed to respond to something besides the first sentence. Regardless of how strongly or weakly you believe that "native representations win out", my argument about misusing language rewrite anecdata still stands, and it would have been far more productive to respond to that point.

                    > If I said "If you drink and drive you will inevitably get into an accident" - would you argue against that statement?

                    If we were having a discussion about automobile safety and you wrote several hundred words about why a specific type of accident isn't indicative of a larger trend, I wouldn't respond by cherry picking the first sentence of your comment, and quoting Google definitions about a phone ringing.

                    • zoogeny 4 days ago

                      You said: "Inevitable means impossible to evade. That's about as close to a black and white statement as possible."

                      I used Google to point out that your argument, which hinged on your definition of what the word "inevitable" means is the narrowest possible interpretation of my statement. An interpretation so narrow that it indicates you are arguing in bad faith, which I believe to be the case. You are accusing me of making an argument that I did not make by accusing me of not understanding what a word means. You are wrong on both accounts as demonstrated.

                      The only person thinking in black and white is the figment of me in your imagination. I've re-read the argument chain and I'm happy leaving my point where it is. I don't think your points support your case: not your attempted counterexample with Prisma, not your exceptional-compiler argument, nor any of the other points you have tried.

                      • throwitaway1123 4 days ago

                        > which hinged on your definition of what the word "inevitable" means is the narrowest possible interpretation of my statement.

                        My argument does not hinge upon the definition of the word inevitable. You originally said "I mean, I can't think of a time a high profile project written in a lower level representation got ported to a higher level language."

                        I gave a relatively thorough accounting of why you've observed this, and why it doesn't indicate what you believe it to indicate here: https://news.ycombinator.com/item?id=43339297

                        Instead of addressing the substance of the argument you focused on this introductory sentence: "I'd like to address your larger point which seems to be that all greenfield projects are necessarily best suited to low level languages."

                        Regardless of how narrowly or widely you want me to interpret your stance, my point is that the data you're using to form your opinion (rewrites from higher to lower level languages) does not support any variation of your argument. You "can't think of a time a high profile project written in a lower level representation got ported to a higher level language" because developers tend to be more hesitant about reaching for lower level languages (due to the higher barrier to entry), and therefore are less likely to misuse them in the wrong problem domain.

          • computably 4 days ago

            > The world is changing before our eyes. The coding LLMs we have already are good but the ones in the pipeline are better. The ones coming next year are likely to be even better. It is time to revisit our long held opinions.

            Making technical decisions based on hypothetical technologies that may solve your problems in "a year or so" is a gamble.

            > And in the case of "reads data from a OS socket/file-descriptor and writes data to a OS socket/file-descriptor", which is the case for a significant number of applications including web servers, I'm starting to doubt that choosing a scripting language for that task, as I once advocated, is a good plan given what I am seeing.

            Arguably Go is a scripting language designed for exactly that purpose.

            • zoogeny 4 days ago

              I wouldn't think choosing a native language over a scripting language is a "gamble" but I suppose that all depends on ability and risk tolerance. I think it would be relatively easy to develop using Rust, Go, Zig, etc.

              I would not call Go a scripting language. Go programs are statically linked single binaries, not a textual representation that is loaded into an interpreter or VM. It has more in common with C than Bash. But to make sure we are clear (in case you want to dig in on calling Go a scripting language), I am talking about dynamic programming languages like Python, Ruby, JavaScript, PHP, Perl, etc., which generally do not compile to static binaries and instead load text files into an interpreter/VM. These dynamic scripting languages tend to have performance below static binaries (like Go, Rust, C/C++) and usually below bytecode VM languages (like C# and Java).

          • anileated 4 days ago

            The fact that many software products are moving to lower-level languages is not a general point in favour of lower-level languages being somehow better—rather, it simply aligns with general directions of software evolution.

            1. As products mature, they may find useful scenarios involving runtime environments that don’t necessarily match the ones that were in mind back when the foundation was laid. If relevant parts are rewritten in a lower-level language like C or Rust, it becomes possible to reuse them across environments (in embedded land, in Web via WASM, etc.) without duplicate implementations while mostly preserving or even improving performance and unlocking new use cases and interesting integrations.

            2. As products mature, they may find use cases that have drastically different performance requirements. TypeScript was not used for truly massive codebases, until it was, and then performance became a big issue.

            Starting a product trying to get all of the above from the get go is rarely a good idea: a product that rots and has little adoption due to feature creep and lack of focus (with resulting bugs and/or slow progress) doesn’t stand a chance against a product that runs slower and in fewer environments but, crucially, 1) is released, 2) makes sound design decisions, and 3) functions sufficiently well for the purposes of its audience.

            Whether LLMs are involved or not makes no meaningful difference: no matter how good your autocomplete is, other things equal the second instance still wins over the first—it still takes less time to reach the usefulness threshold and start gaining adoption. (And if you are making a religious argument about omniscient entities for which there is no meaningful difference between those two cases, which can instantly develop a bug-free product with infinite flexibility and perfect performance at whatever the level of abstraction required, coming any year, then you should double-check whether if they do arrive anyone would still be using them for this purpose. In a world where I, a hypothetical end user, can get X instantly conjured for me out of thin air by a genie, you, a hypothetical software developer, better have that genie conjure you some money lest your family goes hungry.)

            • zoogeny 4 days ago

              I'm not here to predict the future, rather to reconsider old assumptions based on new evidence.

              Of course, LLMs may stay as "autocomplete" forever. Or for decades. But my intuition is telling me that in the next 2-3 years they are going to increase in capability, especially for coding, at a pace greater than the last 2 years. The evidence that I have (by actually using them) seems to point in that direction.

              I'm perfectly capable of writing programs in Perl, Python, JavaScript, C++, PHP, Java. Each of those languages (and more actually) I have used professionally in the past. I am confident I could write a perfectly good app in Go, Rust, Elixir, C, Ruby, Swift, Scala, etc.

              If you asked me 6 months ago "what would you choose to write a basic CRUD web app" I probably would have said TypeScript. What I am questioning now is: why? What would lead me to choose TypeScript? Do the reasons I would have chosen TypeScript continue to make sense today?

              There are no genies here, only questioning of assumptions. And my new assumptions include the assumption that any coding I would do will involve a code assisting LLM. That opens up new possibilities for me. Given LLM assistance, why wouldn't I write my web app layer in Rust or Zig?

              Your assumptions about the present and near future will guide your own decisions. If you don't share the same intuitions you will come to different conclusions.

              • anileated 3 days ago

                > Given LLM assistance, why wouldn't I write my web app layer in Rust or Zig?

                Same reasons as with no LLM assistance. You would be choosing higher maintenance burden and slower development speed compared to your competitors, though. They will get it out faster, they will have fewer issues, and will be able to find people to support it more easily. Your product may run faster, but theirs will work and be out faster.

                • zoogeny 3 days ago

                  Let's imagine we are assembly programmers. You have a particular style of assembly that you believe gives you some advantage over your competitors. The way you structure your assembly gives you a lower maintenance burden and faster development speed compared to your competitors.

                  I show up and say "I have a C compiler". Does it matter at that point how good your assembly is? All of a sudden I can generate 10x the amount of assembly that you generate. And you are probably aghast at what crappy assembly my C compiler generates.

                  Now ask yourself: how often do you look at generated assembly?

                  Compilers don't care about writing maintainable assembly. They are a tool to generate assembly in high volumes. History has shown that people who use C compilers were able to get products to market faster compared to people who wrote using assembly.

                  So let's assume, for the sake of understanding my position, that LLMs will be like the compiler. I give it some high-level English description of the code I want it to run and it generates a high volume of [programming language] as its output. My argument is that the programming language it outputs is important, and it would be better for it to output a language that compiles to low-level native binaries. In the same way I don't care about "maintainable assembly" coming out of a C compiler, I don't care about maintainable Python coming out of my LLM.

                  • anileated 2 days ago

                    Again, your competitor will get there faster and with fewer bugs. LLMs are trained on human input, and humans do not do great at low-level languages. They churn out better Python than C, especially when it comes to refactoring it (I have observed that personally).

                  • Vaguely2178 3 days ago

                    > In the same way I don't care about "maintainable assembly" coming out of a C compiler, I don't care about maintainable Python coming out of my LLM.

                    A well tested compiler is far more deterministic than an LLM, and can be largely treated as a black box because it won't randomly hallucinate output.

                    • zoogeny 3 days ago

                      Humans aren't deterministic. I've trusted junior engineers to ship code. I fail to see a significant difference here in the long term.

                      We have engineering practices that guard against humans making mistakes that break builds or production environments. It isn't like we are going to discard those practices. In fact, we'll double down on them. I would subject an LLM to a level of strict validation that any human engineer would find suffocating.

                      The reason we trust compilers as a black box is because we have created systems that allow us to do so. There is no reason I can see currently that we will be unable to do so for LLM output.

                      I might be wrong, time will tell. We're going to find out because some will try. And if it turns out to be as effective as C was compared to assembly then I want to be on that side of history as early as possible.

                      • Vaguely2178 3 days ago

                        > Humans aren't deterministic.

                        Exactly, which is why I would want humans and LLMs to write maintainable code, so that I can review and maintain it, which brings us back to the original question of which programming languages are the easiest to maintain...

                        • zoogeny 3 days ago

                          Well, we're in a loop then because my response was "you don't care about maintainable assembly".

                          I want maintainable systems you want maintainable code. We can just accept that difference. I believe maintainable systems can be achieved without focusing on code that humans find maintainable. In the future, I believe we will build systems on top of code primarily written by LLMs and the rubric of what constitutes good code will change accordingly.

                          edit: I would also add that your position is exactly the position of assembly programmers when C came around. They lamented the assembly the C compiler generated. "I want assembly I can read, understand and maintain" they demanded. They didn't get it.

                          • Vaguely2178 3 days ago

                            We're stuck in a loop because you're flip flopping between two positions.

                            You started off by comparing LLM output to compiler output, which I pointed out is a false equivalence because LLMs aren't as deterministic as compilers.

                            Then you switched to comparing LLMs to humans, which I'm fine with, but then LLMs must be expected to produce maintainable code just like humans.

                            Now you're going back to the original premise that LLM output is comparable to compiler output, thus completing the loop.

                            • zoogeny 3 days ago

                              There are more elements to a compiler than determinism. That is, determinism isn't their sole defining property. I can compare other properties of compilers to LLMs. No "flip flop" there IMO, but your judgment may vary.

                              Perhaps it is impossible for you to imagine that LLMs can share some properties with compilers and other properties with humans? And that this specific blend of properties makes them unique? And that uniqueness means we have to take a nuanced approach to understanding their impact on designing and building systems?

                              So let's lay it out. LLMs are like compilers in that they take high-level instructions (in the form of English) and translate them into programming languages. Maybe "transpiler" would be a word you prefer? LLMs are like humans in that this translation of high-level instructions to programming languages is non-deterministic, and so it requires system-level controls to handle this imprecision.

                              I do not detect any conflict in these two ideas but perhaps you see things differently.

                              • Vaguely2178 3 days ago

                                > There are more elements to a compiler than determinism.

                                Yes, but determinism is the factor that allows me to treat compilers as a black box without verifying their output. LLMs do not share this specific property, which is why I have to verify their output, and easily verifiable software is what I call "maintainable".

                                • zoogeny 3 days ago

                                  An interesting question you might want to ask yourself, related to this idea: what would you do if your compiler wasn't deterministic?

                                  Would you go back to writing assembly? Would you diligently work to make the compiler "more" deterministic? Would you engineer your systems around potential failures?

                                  How do industries like the medical or aviation deal with imperfect humans? Are there lessons we can learn from those domains that may apply to writing code with non-deterministic LLMs?

                                  I also just want to point out an irony here. I'm arguing in favor of languages like Go, Rust and Zig over the more traditional dynamic scripting languages like Python, PHP, Ruby and JavaScript. I almost can't believe I'm fighting the "unmaintainable" angle here. Do people really think a web server written in Go or Rust is unmaintainable? I'm defending my position as if they are, but come on. This is all a bit ridiculous.

                                  • anileated 2 days ago

                                    > Do people really think a web server written in Go or Rust is unmaintainable?

                                    Things are not black and white. It will be less maintainable relatively speaking, proper tool for the job and all that. That’s why you will be left in the dust.

                                  • Vaguely2178 3 days ago

                                    > How do industries like the medical or aviation deal with imperfect humans?

                                    We have a system in science for verifying shoddy human output, it's called peer review. And it's easier for your peers to review your code when it's maintainable. We're back in the loop.

                                    • zoogeny 3 days ago

                                      That is one system. Are there zero others?

                                      Funny thing about this thread and black and white thinking. I feel a different kind of loop.

    • Jean-Papoulos 4 days ago

      >The main driver behind this project is that while Rust is very quick, the cost of serializing data between Rust and TypeScript is very high.

      This sounds more like a "we're kinda stuck with Javascript here" situation. The team is making a compromise, can't have your cake and eat it too I guess.

    • _bin_ 4 days ago

      i don't think this speaks to the general reasons someone would rewrite a mid- or low-level project in a high-level language, so much as to the special treatment JS/TS get. yes, your data model being the default supported, and everything else in the world having to serialize/deserialize to accommodate that, slows performance. in other words, this is just a reason to use the natively-supported JS/TS, still very much the favorite children of browser engines, over the still sort of hacked-in Rust.

  • sapiogram 4 days ago

    > I mean, I can't think of a time a high profile project written in a lower level representation got ported to a higher level language.

    Software never gets rewritten in a higher level language, but software is constantly replaced by alternatives. First example that comes to mind is Discord, an Electron app that immediately and permanently killed every other voice client on the market when it launched.

    • zoogeny 4 days ago

      Yes, scripting replacements often usurp existing ossified alternatives. And there is some truth that a higher level language gave some leverage to the developers. That is why I mentioned the advent of LLM based coding assistants and how this may level the playing field.

      If we assume that coding assistants continue to improve as they have been and we also assume that they are able to generate lower level code on par with higher level code, then it seems the leverage shifts away from "easy to implement features" languages to "fast in most contexts" languages.

      Only time will tell, of course. But I wonder if we will see a new wave of replacements from Electron based apps to LLM assisted native apps.

    • wrs 4 days ago

      It’s a little more nuanced though — I doubt the audio processing in Discord is written in JavaScript. (But I haven’t looked!)

      • timeon 4 days ago

        Isn't most of Discord backend Rust and Go?

        • JustSkyfall 3 days ago

          They mostly use Elixir on the backend, with Rust for a few services like Go Live. IIRC they don’t use Go anymore

        • wrs 2 days ago

          Given the mention of Electron, I was talking about the client side.

    • timeon 4 days ago

      Sure but that comment was mostly about backend. If Discord used js/ts for backend they wouldn't replace anyone.

    • colonelspace 4 days ago

      > Discord ... immediately and permanently killed every other voice client on the market

      Do you mean voice clients like FaceTime, Zoom, Teams, and Slack?

      • ZeWaka 4 days ago

        They're talking about TeamSpeak, Vent, Mumble, and Skype.

      • anon7000 4 days ago

        Well, voice clients on PC for casual gamer use :p

    • xandrius 4 days ago

      I don't think the success of Discord is due to it being written in Electron. Or is it?

      • jonathanlydall 4 days ago

        I game very little these days, but have run Mumble, Ventrilo and TeamSpeak in the past, and the problem was always the friction in onboarding people onto them: you’d have to exchange host, port, and password at best, or, worse, explain how to download, install and use them.

        Discord can run from a browser, making onboarding super easy. The installable app being in Electron makes for minimal (if any) difference between it and the website.

        In summary, running in the web browser helps a lot, and Electron makes it very easy for them to keep the browser version first class.

        As an added bonus, they can support Linux, Windows and macOS equally well.

        I would say it helps: without Electron, serving all of the above with equal feature parity would just have been too expensive or slow, and perhaps it wouldn’t have been as frictionless for all types of new users as it is.

      • Cthulhu_ 4 days ago

        I do believe it's a factor; web apps have the most freedom when it comes to visual design (and the vast majority of developers for it), and it makes it a crossplatform application. Electron and web applications are the market leader in front-end / client side applications by a very wide margin. I mean I don't like it but that's how it is.

  • melbourne_mat 4 days ago

    I think it's smart to start with a high level language which should reduce development time, prove the worth of the application, then switch to a lower level language later.

    What was that saying again? Premature optimisation is the root of all evil

    • vacuity 4 days ago

      https://news.ycombinator.com/item?id=29228427

      A thread going into what Knuth meant by that quote that is usually shortened to "premature optimization is the root of all evil". Or, to rephrase it: don't tire yourself out climbing for the high fruit, but do not ignore the low-hanging fruit. But really I don't even see why "scripting languages" are the particular "high level" languages of choice. Compilers nowadays are good. No one is asking you to drop down to C or C++.

      • Cthulhu_ 4 days ago

        I think (personal opinion / belief) Go hits that sweet spot; it's an "easy" language, but it compiles down to a binary, unlike e.g. Java, C#, or JS.

        Mind you I'm sure there were similar attempts at a language with those goals, but they didn't have the backing of Google.

    • Byamarro 3 days ago

      I think that early in development you should be able to churn out a lot of hypotheses, quickly test them, and check how people interact with your software. Whether your software makes sense is more important than whether it's fast.

      People are also highly unpredictable, so it is usually a matter of trial and error, very often their feedback may completely erase wide sets of assumptions you were building your product around.

      It's borderline impossible to do this on a mature product, but rewriting a mature product in something faster is not borderline impossible - it's just very hard.

      Note that this doesn't apply if you're just programming something in accordance with an RFC where everything is predefined.

    • jerf 3 days ago

      I think a lot of people are running on facts that are between 10 to 25 years out of date. There was a time when the scripting languages had a very, very large step up in prototyping capability, because the static languages of the time were frankly terrible.

      But the static languages have changed, a lot, for the better since then. I now find that when I'm greenfielding something, if I have even a clue how I want to structure it overall, that static languages end up being faster somewhere around a week into the development process. Dynamic languages are superficially easier to refactor, but the refactorings tend to take the form of creating functions that take more and more possible inputs and this corrodes the design over time. Static programs stay working the whole time, and I can easily transform the entire program to take some parameter differently or something and get assurance I'm not missing a code path.

      I personally actively avoid dynamic languages for initial development now, for anything that is going to be over a week in size. The false economies are already biting by that point and it gets progressively and generally monotonically worse over time.

      This comes from someone who was almost 100% dynamic scripting language in the first 15 years of my career. It's not from lack of understanding of dynamic scripting languages, used at scale.

      • zoogeny 3 days ago

        > static languages end up being faster somewhere around a week into the development process

        And when you factor in LLMs being ridiculously good at scaffolding basic apps, the time to reach that turning point will continue to decrease. It takes me time to write out test harness boilerplate, or to make a nice dev/staging environment configuration. That's why many languages come with a `mylang create proj` command line tool to bootstrap a basic project. But the custom scaffolding an LLM can provide will eventually beat any command line project creation tool we can imagine.

        This is one of the driving realizations of my point. I've coded in a lot of dynamic languages and a lot of static languages, and the distance between their developer experiences is shrinking drastically. I would expect a decent dynamic language expert to become productive in Go very quickly. Rust may be more difficult, but again should be totally possible for any competent programmer. Then add on top of that the fact that they will be ramping up with an LLM that can explain the code they are looking at, provide suggestions on how to approach problems, actually write example code, etc.

        And then there are all of the benefits of deploying statically compiled binaries. Of managing memory layouts precisely. Of taking direct advantage of things like SIMD when appropriate.

  • timewizard 4 days ago

    > the lowest level representation that allows me some ergonomics

    The ergonomics of compiling your code for every combination of architecture and platform you plan to deploy to? It's not fun. I promise.

    > my default assumption to use JS runtimes on the server

    AWS Lambda has a minimum billing interval of 1ms. To do anything interesting you have to call other APIs which usually have a minimum latency of 5 to 30ms. You aren't buying much of anything in any scalable environment.

    > there is decreasing utility in familiarity with a language to be productive.

    I hope you aren't planning on making money from this code. Either way, have fun debugging that!

    > the advantages of scripting languages are being eroded away.

    As long as scripting languages have interfaces which let them access C libraries, either directly or through compiled modules, they will have strong advantages. Just having a CLI where you can test out ideas and check performance is massively powerful, and I hate not having it in any compiled project. Go has particularly bad ergonomics here: writing test cases is easy, but exploring ideas is not, due to its strictness down to even the code styling level.

    • zoogeny 4 days ago

      > compiling your code for every combination of architecture and platform you plan to deploy to

      I mean, on the one hand you are arguing for C FFI and on the other worrying about compiling for every combination of architecture. Those positions seem to be contradictory. Although I guess you're assuming that other people who write the C libraries for you did that work. I guess you better hope libraries exist for every possible performance issue you come across in your cross platform scripting library.

      And why limit your runtime to AWS Lambda? That is a constraint you are placing on yourself. Nowadays with Docker you can have pretty much any Linux you want as an image. But why not just implement on top of cgroups from scratch? I guess we live in a world where that is unthinkable to many. Probably just better to pay AWS. But if you do use Docker, all of a sudden worrying about compiling for all of those architectures seems like less of an issue. And you can use ECS, so you can still pay AWS!

      As for tooling issues, and there are definitely tooling issues with every language, it is a pick your poison kind of thing. I remember really liking Pascal tooling way back in the day. Smalltalk images have some nifty features. Who doesn't like Lisp, the language that taught us all REPL. Not sure I'd choose them for a project today though.

      As LLMs get better, I just assume what constitutes "developer experience" is going to change. Will I even care about how unergonomic writing test cases in Go can be if I can just say "LLM, write a test that covers X, Y, Z case". As long as I can read the resultant output and verify it meets my expectations, I don't care how many characters of code or boilerplate that will force the LLM to generate.

      edit: I misread your point about Go test cases but I'll leave my mistake standing. My overall point was the stuff I find annoying to do myself I can just farm out to the LLM. If the cost of writing an experiment is "LLM, give this a try" and if it works great and if not `git checkout`, then I will be ok with something less optimal.

  • Tadpole9181 4 days ago

    The JS `tsc` type checks the entire 1.5 million line VS Code source in 77s (non-incremental). 7s is a lot better and will certainly improve DX - which is their goal - but I don't see how that's "insufficient".

    The trade-off is that the team will have to start dealing with a lot of separate issues... How do tools like ESLint TS talk to TSC now? How to run this in playground? How to distribute the binaries? And they also lose out on the TS type system, which makes their Go version rely a little more on developer prowess.

    This is an easy choice for one of the most fundamental tools underlying a whole ecosystem, maintained full-time by Microsoft and one of the developers of C# itself.

    Other businesses probably want to focus on actually making money by leading their domain and easing long-term maintenance.

  • pjmlp 3 days ago

    AI tooling is the new wave of Assembly => Compilers transition.

    It won't happen tomorrow, but I am quite certain eventually it will be in a position where executables are generated directly, and we will enter into a new computation model.

    Just as you can still inspect the generated assembly today, and fine-tune it when needed, our AI tools of the future might offer a similar escape hatch.

    • zoogeny 3 days ago

      I suspect this will be the case. And people arguing about dynamic interpreted scripting language vs. static compiled binary language will be like people arguing over flavors of assembly.

      Just like being familiar with assembly is extremely useful in certain circumstances (I spent a lot of time looking at assembly while working in the games industry. In fact, I was part of a small group that found a bug in the MS C++ compiler which we discovered by inspecting the output assembly) it will be extremely useful for programmers to be competent in the low level representations. At least for a good long while (probably years) we'll review almost all of the code generated before shipping it. But it won't be long until we just "trust" the output of the AI tooling.

      And by "trust" I mean we will have engineering practices in place to validate the software before release. Unit testing, integration testing, functional testing, static analysis, etc.

      At some point the volume of code generated by the AIs will be so much that it won't be practical to consider every single line of code. Just like the volume of assembly created by the C compiler is so much that for the most part we just assume it is fine. Only in special cases do we narrowly focus on a hot loop or some other part of the code.

      This might happen slowly, over decades, or quickly over the next two years.

  • bartvk 4 days ago

    > And it isn't that hard these days to spin up a web server with simple routing, database connectivity, etc. in pretty much any language including Zig or Go

    Just two years ago, a friend of mine described it as quite a hassle to get a RESTful backend running in Go. He got it working but it was more work than usual. Was he an outlier or have things been getting better in the framework department?

    • whattidywhat 3 days ago

      Not sure when your friend tried Go, but for the last 5-10 years or so it has been really easy to build REST services in Go. It's practically baked into the standard library, and if you want it even easier there are several extremely popular libraries for it.

      Go tends to have more boilerplate than other languages. So more typing work, less thinking work, less maintenance work once completed.
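
      For what it's worth, a minimal JSON endpoint in Go needs nothing beyond the standard library. Here's a rough sketch (the route and payload are made up for illustration):

      ```go
      package main

      import (
          "encoding/json"
          "log"
          "net/http"
      )

      type health struct {
          Status string `json:"status"`
      }

      func main() {
          // A single route; net/http's default mux is enough for small services.
          http.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
              w.Header().Set("Content-Type", "application/json")
              json.NewEncoder(w).Encode(health{Status: "ok"})
          })
          log.Fatal(http.ListenAndServe(":8080", nil))
      }
      ```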

  • ironmagma 4 days ago

    Inevitably? Well, the promise of using something less efficient in terms of performance is that it will be more efficient in terms of development. Projects often fail because they optimize too early, never build the features they need, or can't iterate fast enough to prove value, and so they die. So if the native version would have been better but the project fails first, it's not so inevitable that it ever gets to that stage.

    • zoogeny 4 days ago

      Right, which is my point about LLM code assistants. Say you had two cases in the past: native but slow to add features, so the project eventually dies, vs. scripted but with performance bad enough that it eventually needs to be rewritten. (Of course, this is a false dichotomy, but I'm playing into your scenario.)

      Now we may have a new case: native but fast to add features using a code assist LLM.

      If that new case is a true reflection of the near future (only time will tell) then it makes the case against the scripted solution. If (and only if) you could use a code assist LLM to match the feature efficiency of a scripting language while using a native language, it would seem reasonable to choose that as the starting point.

      • ironmagma 4 days ago

        That’s an interesting idea. It’s amazing how far we’ve come without essentially any objective data on how much these various methodologies (e.g. using a scripting language) improve or worsen development time.

        The adoption of AI Code Assistance I am sure will be driven similarly anecdotally, because who has the time or money to actually measure productivity techniques when you can just build a personal set of superstitions that work for you (personally) and sell it? Or put another way, what manager actually would spend money on basic science?

  • physicsguy 3 days ago

    Starting any project in the lowest level representation might make it take twice as long and stymie adoption though. It's a hard trade off when you start a project.

  • pier25 4 days ago

    > It honestly has me questioning my default assumption to use JS runtimes on the server (e.g. Node, deno, bun).

    The JS runtimes are fine for the majority of use cases but the ecosystem is really the issue IMO.

    > the benefit of using the same code on the server/client has rarely if ever been a significant contributor to project maintainability for me

    I agree and now with OpenAPI this is even less of an argument.

  • mpweiher 3 days ago

    Yes, I think the promises of the "we can JIT all the overhead away"-camp were always overblown, and never materialized in practice. This is another very strong datapoint for that hypothesis[1].

    However, it is important not to conflate "scripting language" and "dynamic language" and "interpreted". While there is some correlation there, it is not a necessary one.

    Objective-C is an example of a fast AOT-compiled pretty dynamic language, and WebScript was an interpreted scripting language with pretty much identical syntax and semantics.[2]

    What do I mean with fast? In my experience, Objective-C can be extremely fast [3], though it can also be used very much like a scripting language and can also be used in ways that are as slow or even slower than popular scripting languages. That range is very interesting.

    So I don't actually think the tradeoff you describe between low-level unergonomic fast and high-level ergonomic slow is a necessary one, and one of the goals of Objective-S is to prove that point.[4]

    So far, it's looking very good. Basically, the richer ways of connecting components appear to allow fairly simple "scripted" connections to achieve reasonably high performance [5]. However, I now have a very simple AOT compiler (no optimizations whatsoever!) and that gives another factor 2.6 [6].

    Steve Vinoski wrote: "Does developer convenience really trump correctness, scalability, performance, separation of concerns, extensibility, and accidental complexity?"[7].

    I am saying: how about we not have to choose?

    And I'd much rather debug/modify semantically rich, high-level code that my LLM generated.

    [1] https://blog.metaobject.com/2015/10/jitterdammerung.html

    [2] https://blog.metaobject.com/2019/12/the-4-stages-of-objectiv...

    [3] https://www.amazon.com/gp/product/0321842847/ref=as_li_tl?ie...

    [4] https://objective.st

    [5] https://blog.metaobject.com/2021/07/deleting-code-to-double-...

    [6] https://dl.acm.org/doi/10.1145/3689492.3690052

    [7] https://darkcoding.net/research/IEEE-Convenience_Over_Correc...

    • zoogeny 3 days ago

      Yeah, "scripting" language isn't a good word choice on my part. I did mean dynamic/interpreted. But even nowadays that line is blurry with bytecode VMs.

      > And I'd much rather debug/modify semantically rich, high-level code that my LLM generated.

      This I agree with. In fact, we may find that the natural fit for use with LLMs is a language not popular amongst humans. What matters most, in my opinion, is that we end up with native-code executables, complete control over memory layout, and direct access to system calls. Those properties just happen to align with languages like Rust, Go, Zig, C/C++, etc., but they aren't limited to them.

  • winwang 4 days ago

    "A sufficient smart compiler..."

kevlened 4 days ago

For previous attempts at a faster tsc, but in rust, see:

1. https://github.com/dudykr/stc - Abandoned (https://github.com/swc-project/swc/issues/571#issuecomment-1...)

2. https://github.com/kaleidawave/ezno - In active development. Does not have the goal of 1:1 parity to tsc.

  • smarx007 4 days ago

    I think Deno and Bun are the two successful attempts at a faster tsc :)

    • keturakis 4 days ago

      Both Deno and Bun still use current tsc for type checking

    • madjam002 4 days ago

      They just strip types and don’t do any type checking

    • Cthulhu_ 4 days ago

      Those are runtimes primarily, not compilers/type checkers. Likewise, TSC is not a TS runtime.

      • smarx007 3 days ago

        Well, of course. But TSC output (transpiled JS source code) is then run by a JS runtime like Node that has a VM like V8 that makes an internal representation for the JS code. Using Bun or Deno allows you to go to a VM IR from the TypeScript directly without a need for TSC transpilation into JS first.

        But as @keturakis pointed out (thanks!), Deno/Bun still rely on TSC, which I was not aware of.

        • morcus 2 days ago

          Bun doesn't even support a way to check types, just remove them.

          > Note — Similar to other build tools, Bun does not typecheck the files. Use tsc (the official TypeScript CLI) if you're looking to catch static type errors.

pjmlp 4 days ago

Even though I have my reservations about Go, I love that they picked it instead of following the fashion of going with Rust that seems to be the norm now.

A compiled managed language is a much better approach for userspace applications.

Pity that they didn't go with AOT compiled .NET, though.

  • pjc50 4 days ago

    > Pity that they didn't go with AOT compiled .NET, though.

    Yeah. It seems to be unfashionable somewhat even within Microsoft.

    (edit: it seems to be you and me and barely anyone else on HN advocating for C#)

    • nwah1 4 days ago

      Also, this is surprising because this was presented and led by Anders Hejlsberg, who is the creator of both C# and Typescript.

      If anyone should have picked C# it would be him.

      • Cthulhu_ 4 days ago

        At the same time, if anyone can make a language choice, it's him - the fact he didn't pick his own language is high praise for both himself and his neutrality, and the Go language.

        • aws_ls 3 days ago

          The Go compiler toolkit works very well on all the OSes. That would be a consideration.

          • nwah1 2 days ago

            So does the .NET compiler. That is not the issue.

      • dagw 4 days ago

        Hejlsberg seemed quite negative when it came to cross platform AOT compiled C# in several comments he's made, hinting at problems with both performance and maturity on certain platforms.

        • zigzag312 4 days ago

          Projects like this are needed to improve C#'s cross platform AOT. Missed opportunity IMO.

          • Cthulhu_ 4 days ago

            Sure, but this team's focus is on Typescript, not C# / cross-platform AOT; there's only so much time in a day. Others can pick it up I'm sure.

            But I think it's also an indication that Typescript may be bigger and more important for Microsoft than C#/.NET is at this time. It's definitely much more used than C# is according to this non-representative survey of Stack Overflow (https://survey.stackoverflow.co/2024/technology).

          • ryanjshaw 4 days ago

            Absolutely. Go is where it is because of the parent org's commitment to dogfooding it; strange that Microsoft is wasting this opportunity AND sending a negative message to .NET devs.

    • atonse 4 days ago

      This was also surprising to me – C# is a really awesome and modern language.

      I happened to be doing a lot of C# and .NET dev when all this transition was happening, and it was very cool to be able to run .NET in Linux. C# is a powerful language with great and constantly evolving ideas in it.

      But then all the stuff between the runtimes, API surfaces, Core vs Framework, etc all got extremely confusing and off-putting. It was necessary to bring all these ecosystems together, but I wonder if that kept people away for a bit? Not sure.

    • Chyzwar 4 days ago

      I think the main thing is that they are porting, not rewriting. The current tsc is functional in nature, and that makes Go a better fit.

    • pjmlp 4 days ago

      All Azure contributions to CNCF are using a mix of Go and Rust, mostly.

      Here it is kind of weird, given the team.

  • jjice 4 days ago

    If I recall from an article a while back, the idea was originally Rust, but the current compiler design had lots of shared references that would make the port to Rust a lot of work.

    • pjmlp 4 days ago

      Personally, Rust only makes sense in scenarios where automatic memory management of any kind is either unwanted, or where convincing the target group otherwise is a quixotic battle.

      OS kernels, firmware, GPGPU,....

      If it's the ML-inspired type system you're after, there are plenty of options among compiled managed languages; true, Go isn't really in that camp, but whatever.

      • jjice 4 days ago

        I'd love a language that is GC'd like Go, but with an ML-inspired type system, and still imperative. OCaml seems to be the closest thing to Rust in that regard, but it's not imperative.

        • pjmlp 4 days ago

          OCaml has if/else, for loops, whiles, mutations, what are you missing?

          There are also Swift, F# (Native AOT), Scala (Native, GraalVM, OpenJ9).

        • dontlaugh 4 days ago

          That language is Swift.

        • elcritch 4 days ago

          Nim is pretty close to that for me. It's more Pascal-ish in its heritage, but it has a sophisticated type system, including case (variant) types similar to ML sum types, plus compile-time evaluation.

      • cheepin 4 days ago

        Rust memory management is automatic. Object destructors run when the object exits scope without needing explicit management by the programmer

        • pjmlp 4 days ago

          More like compiler assisted management, with compiler errors when the developer doesn't follow the teacher.

          • whattidywhat 3 days ago

            The compiler ensures you are writing memory safe code. Otherwise it rejects that code and helps you see the mistake you made. Why people are so upset when the compiler prevents them from building and shipping unusable code will always baffle me.

            • senorrib 3 days ago

              There's a huge gap between inefficient and unusable. There's a lot of usable code out there that leaks memory. I'd argue compilers are hardly pressed by memory usage, given the transient nature of their execution.

              • whattidywhat 2 days ago

                Saying Rust is unusable is pretty extreme. Tons of serious applications and infrastructure have been using it in production for years, generating lots of money and preventing CVEs.

                Leaking memory is sometimes not a huge issue. Missile allocation is real. Undefined behaviour, seg faults, data races, etc from edge cases slow down development.

                The promise of Rust isn't that it's super fast to learn, but that once you have learned it you never deal with a whole swath of issues ever again.

                And that's speaking from a deficit. Rust is an excellent language for language development. It arguably has the best tooling for it in the ecosystem, in my opinion, and a vibrant community. Some of the most recent languages have their foundations in Rust. That is likely to continue going forward.

            • pjmlp 3 days ago

              There is nothing automatic in that.

              • whattidywhat 3 days ago

                Asking a compiler to understand a programmer's intent for underspecified mutability is beyond the halting problem.

      • surajrmal 4 days ago

        Or possibly you want to use a language you're familiar with in adjacent spaces (eg tools) or you want to tackle concurrency bugs more directly. There is more to rust than it's

    • dist1ll 4 days ago

      Dealing with references you typically find in a compiler is not a problem for Rust. Arena allocation and indices are your friend.

      • PartiallyTyped 4 days ago

        Flattened ASTs are also faster than tree/pointer ASTs.
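
        A rough sketch of that index-based idea, written in Go since that's what the thread is about (all names here are invented for illustration): nodes live in one flat slice and refer to each other by index instead of by pointer.

        ```go
        package main

        import "fmt"

        // NodeKind tags what a node represents.
        type NodeKind uint8

        const (
            KindNumber NodeKind = iota
            KindAdd
        )

        // Node references its children by index into the arena, not by pointer.
        type Node struct {
            Kind        NodeKind
            Value       float64 // used when Kind == KindNumber
            Left, Right int32   // indices into the arena; -1 means "no child"
        }

        // Arena is a flat slice; allocation is just an append.
        type Arena struct{ nodes []Node }

        func (a *Arena) add(n Node) int32 {
            a.nodes = append(a.nodes, n)
            return int32(len(a.nodes) - 1)
        }

        func (a *Arena) eval(i int32) float64 {
            n := a.nodes[i]
            if n.Kind == KindNumber {
                return n.Value
            }
            return a.eval(n.Left) + a.eval(n.Right)
        }

        func main() {
            var a Arena
            one := a.add(Node{Kind: KindNumber, Value: 1, Left: -1, Right: -1})
            two := a.add(Node{Kind: KindNumber, Value: 2, Left: -1, Right: -1})
            sum := a.add(Node{Kind: KindAdd, Left: one, Right: two})
            fmt.Println(a.eval(sum)) // 3
        }
        ```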

    • zamalek 4 days ago

      He also mentioned doing a line-for-line port. Assuming you could somehow manage that, you'd probably end up with something slower than JS (not entirely a joke). I'm a Rust fanboy, but I have to concede that Go was the best choice here.

      If it was a fresh compiler then the choice would be more difficult.

    • johnmw 4 days ago

      I wonder if this project can easily be integrated into Deno (built mainly in Rust)?

  • rat9988 4 days ago

    >Pity that they didn't go with AOT compiled .NET, though.

    I was trying to push .NET as our possible language for somewhat high-performance executables. Seeing this means I'll stop trying to advocate for it, if even this team doesn't believe in it.

    • criddell 4 days ago

      That makes sense if your project has similar constraints and requirements.

      I like when Microsoft doesn't pretend that their technologies are the right answer for every problem.

    • madeofpalk 4 days ago

    One unrelated team at Microsoft not 'believing' in .NET is enough to make you change direction?

      • 9rx 4 days ago

        More specifically, the guy who created C# doesn't believe in it (for this particular project).

        But, of course, that is not unusual. There is no language in existence that is best suited to every project out there.

        • Cthulhu_ 4 days ago

          And true wisdom is realising that. I have a lot of respect for this fellow and his decisions.

          • rat9988 2 days ago

            You can have respect for him and his decisions and still think it doesn't look good for C#.

    • surajrmal 4 days ago

      They cited code style and porting as reasons to use Go over C#, not performance.

      • rat9988 4 days ago

        I didn't say it was very performance critical; Go and C# are both good enough for us in this regard. The problem is that, when evaluating the whole package, they decided against C#. That is what is problematic here.

        • nipah 4 days ago

          But they did not state that it is <because> of C#'s performance, so I don't think this is THAT problematic. I agree it would be nice to see them dogfooding their own language for such a massive project, especially one so closely related to TypeScript (which it inspired in some features), and it is a shame they don't. But that is also the case for many of their projects (they are even pushing React Native for apps nowadays), so I think at some level it's really fine.

          • rat9988 4 days ago

            > But they not stated it is <because> of C#'s performance

            But I just said my point is not about performance at all! It is about the whole package. The performance of C# and Go are both enough for my use case, same for Java and C obviously. They just told us that they don't think the whole package makes sense, and disowned the AOT compilation.

            • nipah 4 days ago

              But you said: > I was trying ot push .net as our possible language for somehow high performance executables. Seeing this means I'll stop trying to advocate for it. If even this team doesn't believe in it.

              Which naturally made me think your point was, indeed, about performance. But as it turns out, I was wrong, so fair enough.

      • rs186 4 days ago

        Also cross platform support

  • bichiliad 4 days ago

    There are some external projects that have tried to port tsc to native. stc[0], for instance, was one. IIRC it started out in Go since Go has a more comparable type system (they both use structural typing), making it easier to do one-to-one conversions of code from one language to the other. I'm not totally sure why it ended up pivoting to Rust.

    [0]: https://github.com/dudykr/stc

  • ninkendo 4 days ago

    > I love that they picked Go instead of the fashion to go Rust

    This seems super petty to me. Like, if at the end of the day you get a binary that works on your OS and doesn’t require a runtime, why should you “love” that they picked one language over another? It’s exactly the same outcome for you as a user.

    I mean, if you wanted to contribute to the project and you knew go better than rust, that would make sense. But sounds like you just don’t like rust because of… reasons, and you’re just glad to see rust “fail” for their use case.

  • timewizard 4 days ago

    > that seems to be the norm now.

    According to whom?

  • tinco 4 days ago

    It's not just a pity, it's very surprising. In my eyes Go is a direct competitor of C#. Whenever you pick Go for a project, C# should have been a serious consideration. Hejlsberg designed C#, and it's astounding that a team in which he's an authority figure would opt to use Go, a language which, frankly, I would not consider for building a compiler.

    Not saying that in a judgemental way, I'm just genuinely surprised. What does this say about what Hejlsberg thinks of C# at the moment? I would assume one reason they don't pick C# is because it's deeply unpopular in the open source world. If Microsoft was so successful in making Typescript popular for open source work, why can't they do it for C#?

    I have not opted to use C# for anything significant in the past decade or so. I am not 100% sure why, but there's always been something I'd rather use. Whether that's Go, Rust, Ruby or Haskell. I always enjoyed working in C#, I think it's a well designed and powerful language even if it never made the top of my list recently. I never considered that there might be something so fundamentally wrong with it that not even Hejlsberg himself would use it to build a Typescript compiler.

    What's wrong with C#?

    • duckerude 4 days ago

      Anders Hejlsberg explains here: https://youtu.be/10qowKUW82U?t=1154. TL;DW:

      - C# is bytecode-first, Go targets native code. While C# does have AOT capabilities nowadays this is not as mature as Go's and not all platforms support it. Go also has somewhat better control over data layout. They wanted to get as low-level as possible while still having garbage collection.

      - This is meant to be something of a 1:1 port rather than a rewrite, and the old code uses plain functions and data structures without an OOP style. This suits Go well while a C# port would have required more restructuring.

      • neonsunset 4 days ago

        This is a shockingly out-of-date statement by Anders.

        I'm not sure what's going on; I guess he's just not involved enough with the runtime side of .NET to actually know where the capability sits circa 2024/2025. But really, it's a terrible situation to be in, especially given just how much worse the langdev UX in Go is compared to C#, F# or Rust. No one would've batted an eye if either of those had been used.

        • dimgl 4 days ago

          > Especially just how worse langdev UX in Go is compared to C#, F# or Rust.

          Can you explain why the DX in Go is "worse"? I've seen the exact opposite during my professional work.

          • whimsicalism 4 days ago

            the typing situation in Go is a mess, GADTs are generally a joy to work with, nullability is not.

          • madeofpalk 4 days ago

            The lack of optionals/enums/sum types is a huge regression from TypeScript to Go IMHO.

        • thund 4 days ago

          Honest q, which part is out of date and why? Thanks

          • valcron1000 4 days ago

            Pretty much everything:

            > While C# does have AOT capabilities nowadays this is not as mature as Go's and not all platforms support it

            https://learn.microsoft.com/en-us/dotnet/core/deploying/nati...

            Only Android is missing from that list (marked as "Experimental"). We could argue about maturity but this is a bit subjective.

            > Go also has somewhat better control over data layout

            How? C# supports structs, ref structs (stack allocated only structures), explicit stack allocation (`stackalloc`), explicit struct field layouts through annotations, control over method local variable initialization, control over inlining, etc. Hell, C# even supports a somewhat limited version of borrow checking through the `scoped` keyword.

            > This is meant to be something of a 1:1 port rather than a rewrite, and the old code uses plain functions and data structures without an OOP style.

            C# has been consistently moving into that direction by taking more and more inspiration from F#.

            The only reasonable justification would be the extensive use of structural typing, which is present in TS and Go but not in C#.

            • foobarbaz33 2 days ago

              > C# supports structs,

              That's sort of the problem with C#. It couples the type (struct vs class) with allocation. C# started life by copying 1990s Java "everything-is-a-reference". So it's in a weird place where things were bolted on later to give more control, but it still needs to support the all-objects-are-refs style. C# is just not ergonomic if you need to care about data layout in memory.

              Go uses a C-like model. Everything is a value type. Real pointers are in the language. You can write a function that takes pointers and does not care whether they point to the stack, the heap, or static data. That function can be used for all three, no fuss.
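
              A small sketch of that point (the `Point` type and `scale` function are hypothetical): the same function accepts a pointer regardless of where the value lives.

              ```go
              package main

              import "fmt"

              type Point struct{ X, Y int }

              // scale doesn't know or care where p points: stack, heap, or package data.
              func scale(p *Point, k int) {
                  p.X *= k
                  p.Y *= k
              }

              var global = Point{X: 1, Y: 1} // lives in the package's static data

              func main() {
                  local := Point{X: 2, Y: 2} // typically stack-allocated
                  heaped := new(Point)       // may be heap-allocated (escape analysis decides)
                  heaped.X, heaped.Y = 3, 3

                  scale(&local, 10)
                  scale(heaped, 10)
                  scale(&global, 10)

                  fmt.Println(local, *heaped, global)
              }
              ```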

              • valcron1000 16 hours ago

                > It couples the type (struct vs class) with allocation

                Agree. Where things are allocated is a consumer decision.

                > C# is just not ergonomic if you need to care about data layout in memory

                I disagree. I work on a public high-performance C# codebase and I don't usually face issues when dealing with memory allocations and data layout. You can perfectly well use structs everywhere (value types) and pass references when needed (`ref`).

                > Now you can write a function that inputs pointers and does not care whether they point to stack, heap, or static area.

                You can do this perfectly fine in C#, it might not be what some folks consider "idiomatic OOP" but I could not care less about them.

            • neonsunset 4 days ago

              Chances are it was just the personal preference of the team, and decades of arguing about language design have worn out Anders Hejlsberg. I don't think structural typing alone is enough of an argument to justify the choice over Rust. Maybe the TS team thought choosing Go would have better optics. Well, they won't have it both ways, because this decision is in my opinion clearly short-sighted, and as someone aptly pointed out on Twitter, they will now be beholden to Google's control over Go should they ever need the compiler to support a new platform or evolve in a particular way. Something they would've gotten easily with .NET.

              • vips7L 4 days ago

                On the topic of preference, this thread has really shown me that there is a HUGE preference for a native-aot gc language that is _not_ Go. People want AOT because of the startup and memory characteristics, but do not want to sacrifice language ergonomics. C# could fill that gap if Microsoft would push it there.

                • pebal 4 days ago

                  Just use the fast GC library in C++.

                  • vips7L 4 days ago

                    I don't think C++ has good language ergonomics.

                    • pebal 4 days ago

                      I don't think there is anything faster.

                      • vips7L 4 days ago

                        I highly doubt that bolting a GC on to C++ is going to be any faster than the equivalent C# or Java code.

                        • pebal 4 days ago

                          Doubt is human, but it isn't always warranted. In C++ you can use a concurrent, completely pause-free garbage collector, where the programmer decides which data is managed by the GC. This enables code optimizations in ways that aren't possible in C# and Java.

                          • vips7L 4 days ago

                            You realize that is literally not the same thing? I said equivalent code. The whole reason of using a managed language with GC is to not think about those things because they eat up thought and development time. Of course the language that will let you hand optimize every little line will eventually be more performant. I really think you’re discounting both C#’s ability to do those things and just how good Java’s GCs are. Anyway, thats not the point.

                            The point is C++ sucks dude. There is no way that you can reasonably think that bolting a GC on to C++ is going to be a pleasurable experience. This whole conversation started with _language ergonomics_. I don’t care that it’ll save 0.5 milliseconds. I’d rather dig holes than write C++.

                            • pebal 4 days ago

                              Where performance is paramount, developer convenience takes a backseat. Moreover, C++ has evolved significantly in recent years and is now quite enjoyable to use. We’re also discussing a tool in this thread whose performance is critical for developers. Over-simplifying code will ultimately lead to programmers using such solutions being replaced by AI, while the software itself will demand enormous computational power. That’s not the way forward.

                              • vips7L 3 days ago

                                We’re talking about a tool whose performance profile with a managed language is perfectly acceptable as deemed by the choice to use Go. Let alone the fact that this thread you’ve been replying in has never been about achieving the utmost performance.

                                You’re absolutely delusional if you think C++ is enjoyable compared to any managed language or if you think AI is capable of replacing anything.

                                You’ve moved this conversation extremely far off topic and I won’t be replying again.

                                Cheers dude. Good luck with your chatbots and the CVEs from your raw pointers.

                                • pebal 3 days ago

                                  I assume that the original performance profile of these tools was satisfactory to their creators, yet they still decided to rewrite them. I admire programmers who claim that their tools don't need to be maximally optimized. This is likely an attempt to justify the fact that their products aren't exceptionally performant either. Just take a look at the TIOBE rankings, and you'll see how many programmers hold a different view than you.

                        • pjmlp 3 days ago

                          It works for Unreal.

        • whatthemick 4 days ago

          Isn't the AOT story for F# pretty meh? AOT + System.Text.Json requires source generation as best I can tell, which F# doesn't support yet (to my knowledge).

          • neonsunset 4 days ago

            In complex projects like this, Go requires manual scripting and build-time code generation. Arguably, writing a small shim project in C# is much easier. You don't exactly do a lot of JSON serialization in a compiler either way. Other than that - F# "just works" and does not require anything extra. It is just IL after all.

            NativeAOT story itself is also interesting - I noted it in a sibling comment but .NET has much better base binary size and binary size scalability through stronger reachability analysis, metadata compression and pointer-rich binary sections dehydration at a small startup cost (it's still in the same ballpark). The compiler output is also better and so is whole program view driven devirtualization, something Go does not have. In the last 4 years, .NET's performance has improved more than Go's in the last 8. It is really good at text processing at both low and high level (only losing to Rust).

            The most important part here is that TypeScript at Microsoft is a "first-party" customer. This means if they need additional compiler accommodations to improve their project experience from .NET, they could just raise it and they will be treated with priority.

            This decision is technically and politically unsound at multiple levels at once. For example, they will need good WASM support. .NET's existing WASM support is considered "decent" and even that one is far from stellar, yet considered ahead of the Go one. All they needed was to allocate additional funding for the ongoing already working NativeAOT-LLVM-WASM prototype to very quickly get the full support of the target they needed. But alas.

            • pjmlp 4 days ago

              I already hinted on BlueSky that they shouldn't wonder why .NET has adoption problems outside the traditional Windows ecosystem, when decisions like these are taken.

              • neonsunset 4 days ago

                The nightmare of Midori never ends. And especially right as the platform, from the technical standpoint, is getting really good(tm).

    • dustedcodes 4 days ago

      C# has become a poor jack of all trades, trying to be Java, Go and F# at the same time and ending up a shitty, poor version of all of them. On top of that, .NET has become very enterprisey bloatware. In all honesty, I'm not surprised that they went with Go, as it has a clear identity and a clear use case which it caters to extremely well, and it doesn't lose focus by trying to be too many other unrelated things at the same time.

      Maybe it's time to stop eating everything that Microsoft sales folks/evangelists spoon-feed you and wake up to the fact that just because people paid by Microsoft to bang the drum about Microsoft products tell you that .NET and C# are oh so good and the best at everything, that doesn't make it credible.

      Look at the hard facts. Every single product which Microsoft has built that actually matters (e.g. all their Azure CNCF stuff, Dapr, now this) is using non Microsoft languages and technologies.

      You won't see Blazor being used by Microsoft or the 73rd reinvention of ASP.NET Core MVC Minimal APIs Razor Pages Hocus Pocus WCF XAML Enterprise (TM) for anything mission critical.

      • lossolo 4 days ago

        If not for Microsoft's backing, C# would have died a long time ago. It's just another D, but with a lot more money behind it. It had its chance/momentum, but it failed, and its time has passed. Resurrecting the language now would be very difficult.

    • dimgl 4 days ago

    C# needs a runtime (the .NET CLR) while Go compiles down to a self-contained binary. And the Go toolchain lets you cross-compile for other architectures fairly easily.

      So that could be a fundamental reason why.

      • bitwize 4 days ago

        .NET has AOT compilation now. There really is no excuse, especially when you consider that C# has a pretty decent type system and Go has an ad-hoc, informally specified, bug-ridden, slow implementation of half of a decent type system.

        • dimgl 4 days ago

          > Go has an ad-hoc, informally specified, bug-ridden, slow implementation of half of a decent type system.

          It's not lost on me that this is a widely used aphorism. The problem is that it's not true in any way shape or form.

          • mervz 4 days ago

            It absolutely is... Go's type system is an abomination.

            • nipah 4 days ago

              Let's not assume C#'s type system is THAT much better, it is also a mess in dozens of cases and is hardly pleasant from a DX standpoint.

          • madeofpalk 4 days ago

            People using pointers when they want to hack in null values points towards a problem in Go's type system.

      • rat9988 4 days ago

        The grandparent was talking about AOT.

grantwu 4 days ago

> By far the most important aspect is that we need to keep the new codebase as compatible as possible, both in terms of semantics and in terms of code structure. We expect to maintain both codebases for quite some time going forward. Languages that allow for a structurally similar codebase offer a significant boon for anyone making code changes because we can easily port changes between the two codebases. In contrast, languages that require fundamental rethinking of memory management, mutation, data structuring, polymorphism, laziness, etc., might be a better fit for a ground-up rewrite, but we're undertaking this more as a port that maintains the existing behavior and critical optimizations we've built into the language. Idiomatic Go strongly resembles the existing coding patterns of the TypeScript codebase, which makes this porting effort much more tractable.

--https://github.com/microsoft/typescript-go/discussions/411

I haven't looked at the tsc codebase. I do currently use Golang at my job and have used TypeScript at a previous job several years ago.

I'm surprised to hear that idiomatic Golang resembles the existing coding patterns of the tsc codebase. I've never felt that idiomatic code in Golang resembled idiomatic code in TypeScript. Notably, sum types are commonly called out as something especially useful in writing compilers, and when I've wanted them in Golang I've struggled to replace them.

Is there something special about the existing tsc codebase, or is the statement about idiomatic Golang resembling the existing codebase something you could say about most TypeScript codebases?
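
For context, the usual Go workaround, sketched below with invented names, is a "sealed" interface (an unexported marker method) plus a type switch. It approximates a discriminated union, but the compiler gives no exhaustiveness checking.

```go
package main

import "fmt"

// Expr is a "sealed" interface: only types with the unexported
// marker method can implement it, approximating a sum type.
type Expr interface{ isExpr() }

type NumberLit struct{ Value float64 }
type BinaryExpr struct {
	Op          string
	Left, Right Expr
}

func (NumberLit) isExpr()  {}
func (BinaryExpr) isExpr() {}

func eval(e Expr) float64 {
	// No exhaustiveness check: forgetting a case is only caught at runtime.
	switch e := e.(type) {
	case NumberLit:
		return e.Value
	case BinaryExpr:
		if e.Op == "+" {
			return eval(e.Left) + eval(e.Right)
		}
		panic("unsupported op: " + e.Op)
	default:
		panic("unknown expression kind")
	}
}

func main() {
	fmt.Println(eval(BinaryExpr{Op: "+", Left: NumberLit{1}, Right: NumberLit{2}})) // 3
}
```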

  • jchw 4 days ago

    > I'm surprised to hear that idiomatic Golang resembles the existing coding patterns of the tsc codebase. I've never felt that idiomatic code in Golang resembled idiomatic code in TypeScript.

    To be fair, they didn't actually say that. What they said was that idiomatic Go resembles their existing patterns. I'd imagine what they mean by that is that a port from their existing patterns to Go is much closer to a mechanical 1:1 process than a port to Rust or C#. Rust is the obvious choice for a fully greenfield implementation, but reorganizing around idiomatic Rust patterns would be much harder for most programs that are not already written in a compatible style. e.g. For Rust programs, the precise ownership and transfer of memory needs to be modelled, whereas Go and JS are both GC'd and don't require this.

    For a codebase that relies heavily on exception handling, I can imagine a 1:1 port would require more thought, but compilers generally need to have pretty good error recovery, so I wouldn't be surprised if tsc has bespoke error handling patterns that defer error handling and pass around errors as values a lot; that would map pretty well to Go. See the sketch below.
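
    To make the "errors as values" pattern concrete, here's a minimal sketch (the `Diagnostic` type and `Checker` are invented, not tsc's actual API): the checker records problems and keeps going instead of aborting.

    ```go
    package main

    import "fmt"

    // Diagnostic is a made-up stand-in for a compiler error that is
    // recorded and carried along rather than aborting the run.
    type Diagnostic struct {
        Pos     int
        Message string
    }

    type Checker struct {
        diags []Diagnostic
    }

    func (c *Checker) errorf(pos int, format string, args ...any) {
        c.diags = append(c.diags, Diagnostic{Pos: pos, Message: fmt.Sprintf(format, args...)})
    }

    func (c *Checker) checkAssignment(pos int, target, value string) {
        // Keep going after recording the problem, so later code is still checked.
        if target != value {
            c.errorf(pos, "type %q is not assignable to type %q", value, target)
        }
    }

    func main() {
        var c Checker
        c.checkAssignment(10, "string", "number")
        c.checkAssignment(42, "number", "number")
        for _, d := range c.diags {
            fmt.Printf("pos %d: %s\n", d.Pos, d.Message)
        }
    }
    ```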

    Most TypeScript projects are very far away from compiler code, so that this wouldn't resemble typical TypeScript isn't too surprising. Compilers written in Go also don't tend to resemble typical Go either, in fairness.

  • nathanrf 4 days ago

    I'm not involved in this rewrite, but I made some minor contributions a few years ago.

    TSC doesn't use many union types, it's mostly OOP-ish down-casting or chains of if-statements.

    One reason for this is, I think, performance; most objects are tagged by bitsets in order to pack more info about the object without needing additional allocations. But TypeScript can't really (ergonomically) represent this in the type system, so you don't get any really useful unions.

    A lot of the objects are also secretly mutable (for caching/performance) which can make precise union types not very useful, since they can be easily invalidated by those mutations.
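
    For illustration, the bitset idea looks roughly like this in Go (the flag names are invented, not tsc's actual flags): several booleans about a node are packed into one integer field, so checks are cheap and need no extra allocation.

    ```go
    package main

    import "fmt"

    // NodeFlags packs several facts about a node into a single word.
    type NodeFlags uint32

    const (
        FlagOptional NodeFlags = 1 << iota // e.g. `width?: number`
        FlagReadonly
        FlagExported
    )

    type Node struct {
        Name  string
        Flags NodeFlags
    }

    func (n *Node) Has(f NodeFlags) bool { return n.Flags&f != 0 }

    func main() {
        n := Node{Name: "width", Flags: FlagOptional | FlagReadonly}
        fmt.Println(n.Has(FlagOptional), n.Has(FlagExported)) // true false
    }
    ```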

  • dcre 4 days ago

    In the embedded video they show some of the code side by side and it is just a ton of if statements.

    https://youtu.be/pNlq-EVld70?si=UaFDVwhwyQZqkZrW&t=323

    • 1oooqooq 4 days ago

      To be fair, there aren't many ways to implement a token matcher.

      Though looking at that flood of loose ifs+returns, I kinda wish they used Rust :)
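
      For a sense of what that looks like, here's a simplified sketch (not the actual tsc scanner): a hand-written scanner really is mostly character comparisons and early returns.

      ```go
      package main

      import "fmt"

      type TokenKind int

      const (
          TokenEOF TokenKind = iota
          TokenPlus
          TokenEquals       // =
          TokenEqualsEquals // ==
          TokenIdentifier
      )

      // nextToken is the classic flood of ifs/returns: peek at a character,
      // maybe peek one more, and return a kind plus how far we advanced.
      func nextToken(src string, pos int) (TokenKind, int) {
          if pos >= len(src) {
              return TokenEOF, pos
          }
          c := src[pos]
          if c == '+' {
              return TokenPlus, pos + 1
          }
          if c == '=' {
              if pos+1 < len(src) && src[pos+1] == '=' {
                  return TokenEqualsEquals, pos + 2
              }
              return TokenEquals, pos + 1
          }
          // Everything else is lumped together for this sketch.
          return TokenIdentifier, pos + 1
      }

      func main() {
          kind, next := nextToken("==+", 0)
          fmt.Println(kind, next) // prints the TokenEqualsEquals value (3) and position 2
      }
      ```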

      • dcre 4 days ago

        I’d guess Rust compile times weren’t worth it if they weren’t going to be taking advantage of the type system in interesting ways.

dimgl 4 days ago

I'm really surprised by this visceral reaction to not choosing Rust. Go is a great language and I'd choose it for a majority of projects over Rust just based off of the simplicity of the language and the ability to spin up developers on it quickly. Microsoft is a big corporation.

Why _not_ use Go?

  • homebrewer 4 days ago

    > Why _not_ use Go?

    Because of its truly primitive type system, and because Microsoft already has a much better language — C#, which is both faster and can be more high level and more low-level at the same time, depending on your needs.

    I am a complete nobody to argue with the likes of Hejlsberg, but it feels like AOT performance problems could be solved if tsc needed it, and tsc adoption of C# would also help push C#/.NET adoption. Once again, Microsoft proves that it's a bunch of unrelated companies at odds with each other.

    • triceratops 4 days ago

      I'm inclined to trust the judgement of Hejlsberg, the chief architect of C#, in this matter.

    • 9rx 4 days ago

      > Because of its truly primitive type system

      That is the main reason they gave for why they chose Go. The parent asked "Why _not_ use Go?"

      • subarctic 4 days ago

        So they like having all the footguns?

        • 9rx 4 days ago

          It was stated from the angle of wanting to ship software sometime this century.

          But there is probably some truth in what you say as well. Footguns are no doubt refreshing after being engrossed in TypeScript (and C#) for decades. At some point you start to notice that your tests end up covering all the same cases as your advanced types, and you begin to question why you are putting in so much work repeating yourself, which ultimately leads you to look for something better.

          Which, I suppose, is why industry itself keeps ending up taking that to the extreme, cycling between static and dynamic typing over and over again.

          • nipah 4 days ago

            > At some point you start to notice that your tests end up covering all the same cases as your advanced types

            I don't think this is fair [at all]: you use the types precisely so that you don't need to rely so heavily on tests; they either tell some objective truths about your code at compile time (thus reducing the natural need for specific tests) or your type system is simply useless. Either way, I don't think the "industry" is a person swinging itself on a pendulum; there are more things under the sun than we can count, and millions of individuals in their everyday projects may not reason about things this way, instead just choosing "well, person X said this language is more maintainable and readable, and I trust X, so I'll use it" (which is a rational thing to do, to some extent).

            • 9rx 4 days ago

              > I don't think this is fair [at all]: you use the types precisely so that you don't need to rely so heavily on tests

              At the extreme end of the spectrum that starts to become true. But the languages that fill that space are also unusable beyond very narrow tasks. This truth is not particularly relevant to what is seen in practice.

              In the realm of languages people actually use on a normal basis, with their half-assed type systems, a few more advanced concepts sprinkled in here and there really don't do anything to reduce the need for testing as you still have to test around all the many other holes in the type system, which ends up incidentally covering those other cases as well.

              In practice, the primary benefit of the type system in these real-world languages is as it relates to things like refactoring. That is incredibly powerful and not overlapped by tests. However, the returns are diminishing. As you get into increasingly advanced type concepts, there is less need/ability to refactor on those touch points.

              Most seem to agree that a complete type system is way too much (especially for general purpose programming), and no type system is too little; that a half-assed type system is the right balance. However, exactly how much half-assery is the right amount of half-assery is where the debate begins. I posit that those who go in deep with thinking less half-assery is the way eventually come to appreciate more half-assery.

              > I don't think the "industry" is a person

              Nobody does.

              • nipah 4 days ago

                  Hmmm, I think this is an interesting discussion. There are many sides I need to respond to here; maybe I won't be able to cover everything, but here I go.

                  See, I fundamentally disagree that those languages are "unusable beyond very narrow tasks", because I never stated that only a complete and absolutely proven type system can provide those proofs. In fact, even a relatively mid-tier (a little bit above average) type system like C#'s can already provide enormous benefits in this regard. When you test something like raw JavaScript, you end up testing things as basic as the shape of your objects; in C# you don't have to do this, because the type system dictates the shape. You also have to be very careful around possibly-null objects and values, which in a language with "proper" nullable types (and support for them in the type system and static checkers) like C# can be reduced greatly (if you use the feature, naturally). C# also "brings the types into runtime" through reflection, so you get things you don't need to test in your own code (only when developing the library); you will not see libraries whose whole job is to assert shapes, like 'zod' or 'pydantic', in C# or other mid-tier typed languages. C#'s type system also proves many things about the safety of your code: for example, you basically never need to test your usage of Spans, because the type system and static analysis will already rule out most problematic usages. You also never need to test whether your int is actually a float because some random place in your code set it to one (like in JS), and you never need to test many other basic assumptions that even an extremely basic type system (even Go's) would give you.

                  This is to say that, basically, your claim doesn't hold for relatively simple type systems. I have also yet to see it hold for more advanced ones. For example, Rust is a reasonably widely used language for a lot of low-level projects, and I have never seen anyone testing (well-bounded, safe) Rust code for the basic shapes of its types, nor for the conclusions the type system provides while you write it: testing whether the type system really caught that ownership transfer happening here, or whether it is really safe to assume there's only one mutable reference to that object after you called that method, or whether the destructor really runs at the end of the function's scope, or even whether an overly complex associated-type result was actually what you meant it to be. (In fact, if you ever use those complicated types, it is precisely to get very strong compile-time guarantees that a test could not cover entirely, and that you would not write unit tests for in the first place.) So I don't think it is true that you need a powerful type system to see a reduction in the tests you would otherwise need in a completely dynamically typed language, nor do I think it is true that once you have really powerful type constructs you will "start to notice that your tests end up covering all the same cases as your advanced types". I also don't think you need to go to the extreme end of the spectrum to see those benefits; they appear gradually and increase gradually as you move toward that end (where you find extremely uncommon things like dependent typing, refinement types or effect systems).

                  I also don't agree that it matters whether "most people" think or don't think about powerful type systems and the languages using them; it matters more that the right people are using them, people who want to benefit from this, than the everyday masses (though that is another overly complex discussion). And while I can understand your feelings toward the "low end of half-assed type systems", and even agree to a reasonable degree (naturally, with my own caveats), I don't think glorifying mediocre type systems is the way to go (as many people do, for some terrifying reason). It is enough to recognize that a half-assed type system usually gets the job done, and that's completely fine and okay, it may even be faster to write with, rather than arguing that we should "pursue primitive type systems" because we can do things well with them. Maybe I'm digressing too much; it's hard to respond to this comment in a satisfactory manner.

                >> I don't think the "industry" is a person

                > Nobody does.

                Yeah, this was not a very productive point of mine, sorry.

                • 9rx 4 days ago

                  > I fundamentally disagree that those languages are "unusable beyond very narrow tasks"

                  Then why do you think nobody uses them (outside of certain narrow tasks)? It is hard to deny the results.

                  The reality is that they are intractable. For the vast majority of programming problems, testing is good enough and far, far more practical. There is a very good reason why the languages people normally use (yes, including C# and Rust) prefer testing over types.

                  > See, when you test for something like raw JavaScript, you end up testing things that are even about the shape of your objects

                  Incidentally, but not explicitly. You also end up incidentally testing things like the shape even in languages that provide strict guarantees in the type system. That's the nature of testing.

                  I do agree that testing is not well understood by a lot of developers. There are for sure developers who think that explicitly testing for, say, the shape of data is a test that needs to be written. A lot of developers straight up don't know what makes for a useful test. We'd do well to help them better understand testing, but I'm not sure "don't even think about it, you've got a half-assed type system to lean on!" gets us there. Quite the opposite.

                  > it matters more that the right people are using them

                  Well, they're not. And they are not going to without some fundamental breakthrough that changes the tractability of using languages with an advanced (on the full spectrum, not relative to Go) type system. The tradeoffs just aren't worth it in nearly every case. So we're stuck with half-assed type systems and relying on testing, for better or worse. Yes, that includes C# and Rust.

                  > I don't think glorifying mediocre type systems is the way to go (like many people usually do, for some terrifying reason).

                  Does it matter? Engineers don't make decisions based on some random emotional plea on HN. A keyboard cowboy might be swayed in the wrong direction by such, but then this boils down to being effectively equivalent to "If we don't talk about sex maybe teenage pregnancy will cease." Is that really the angle you want to go with?

                  • nipah 3 days ago

                    > Then why do you think nobody uses them (outside of certain narrow tasks)? It is hard to deny the results.

                    > The reality is that they are intractable. For the vast majority of programming problems, testing is good enough and far, far more practical. There is a very good reason why the languages people normally use (yes, including C# and Rust) prefer testing over types.

                    Deny what results? Do you have some kind of formal demonstration that they are impossible to use outside of those "certain narrow tasks" (unknown)? Or do you have proof that NOBODY uses them for more than those "certain narrow tasks"? Otherwise this is more "I feel like it" than something I would even need to justify deeply.

                    Also, with Rust this is certainly false. Most people that I've seen using it (and myself) don't overly test everything besides more complex behavior (which types can hardly prove correct), but it eliminates the need for a whole suite of smaller tests that would be necessary in less powerful languages (it is literally regarded as one of the languages where "if it compiles, it works" -- or at least "generally works" -- for a reason).

                    > Incidentally, but not explicitly. You also end up incidentally testing things like the shape even in languages that provide strict guarantees in the type system. That's the nature of testing.

                    Now I want some proof of that. Make an example test that "incidentally tests things like the shape" in C#, please. I've seen a good bunch of codebases in C# and I'm pretty sure I never saw something even remotely like this.

                    > I do agree that testing is not well understood by a lot of developers. There are for sure developers who think that explicitly testing for, say, the shape of data is a test that needs to be written. A lot of developers straight up don't know what makes for a useful test. We'd do well to help them better understand testing, but I'm not sure "don't even think about it, you've got a half-assed type system to lean on!" gets us there. Quite the opposite.

                    Now this point is getting lost, you changed from:

                    > At some point you start to notice that your tests end up covering all the same cases as your advanced types, and you begin question why you are putting in so much work repeating yourself, which ultimately sees you want to look for better.

                    To "I know better than a lot of developers how to test", which don't make any sense to me. You either has the same baseline testing knowledge of this "lot of developers", and hence reach similar conclusions in regards to testing (what I quoted), or you have a better understanding of it than them (and your conclusions are merely based on your own perception of testing). I don't think those points are free to take, you would need to justify this a little bit more, and I'm sure """don't even think about it, you've got a half-assed type system to lean on!""" was not the core of my point, nor a faithful representation of what I said.

                    > Well, they're not. And they are not going to without some fundamental breakthrough that changes the tractability of using languages with an advanced (on the full spectrum, not relative to Go) type system. The tradeoffs just aren't worth it in nearly every case. So we're stuck with half-assed type systems and relying on testing, for better or worse. Yes, that includes C# and Rust.

                    I would not call Rust's type system 'half-assed' tho; it is very comparable to a bunch of ML languages, and it is really a very sophisticated type system with HM type inference, generic associated types, traits and more powerful things. Comparing it to C#'s would be unreasonable. I may also be a little bit mean to C#; it has a "mid-tier, but sufficiently good type system" for many purposes. My main problems with it are regarding the type inference (and the lack of some basic features), but it has had generics since early versions, and it has interfaces, classes, subtyping, recursive type constraints, extension methods, deterministic disposal, scoped local definitions and a bunch of small useful features. It is surely mid-tier in many aspects, but not something trivial, and I don't think you can put them all in the same basket. Either way, I also don't think you got my point: I said it matters more that the right people are using them, precisely because those are the people that would make good use of those type systems. As a very simple example (and I consider Rust's type system a very powerful one in this context), it was interesting to see someone like Asahi Lina say that the Rust language and its features were useful for writing a GPU driver, that she experienced fewer of the problems common in C (a language with a way smaller and simpler type system), and that it was having some positive effects on the work. Surely, most software is not written in Rust, but the software that is, and this is what matters, is being developed by the right people who will use it right. This is another point, as I stated earlier, but you responded to it, so I'm giving a better exploration of the surroundings of what I meant here.

                    > Does it matter? Engineers don't make decisions based on some random emotional plea on HN. A keyboard cowboy might be swayed in the wrong direction by such, but then this boils down to being effectively equivalent to "If we don't talk about sex maybe teenage pregnancy will cease." Is that really the angle you want to go with?

                    It absolutely does matter. And I do believe we should talk less about mundane things and that glorifying bad ways of living can have a terrible influence on teenage brains (and even on adults in many cases), but that is also another discussion. My point is that people don't argue emotionally as if they were arguing emotionally, they argue emotionally as if they were right. They are (generally) not saying "well, see, I really love Go and its simplicity, so because of my personal preferences I'm saying that other languages are bad"; they are saying "see, as we obviously value simplicity, and Go is simpler than that X language, Go is better than X language" (which is the shape of the arguments I usually see, not ipsis litteris, but in their implications and style), and this is much more dangerous than any "purely emotional plea" (and also, most software engineers are not masters of argumentation who can dissect something like this and find all the intricate problems and possible fallacies behind it; they will believe whatever is most believable at the moment for them and that's generally it).

                    • 9rx 3 days ago

                      > Deny what results?

                      The results of software written in languages with robust type systems. Having mathematical guarantees that your program is correct is a good place to be, but climbing the mountain to get there is, well...

                      > Do you have some kind of formal demonstration that they are impossible to use outside of those "certain narrow tasks" (unknown)? Or do you have proof that NOBODY uses them for more than those "certain narrow tasks"?

                      See, now you're starting to understand exactly why these languages are intractable. But perhaps we can dumb things down to my mere mortal level: Why don't you use those languages for regular programming tasks?

                      > To "I know better than a lot of developers how to test"

                      How, exactly, did you reach that conclusion? You don't have to be good at something to recognize when people are bad at something.

                      > but it eliminates the need for a whole suite of smaller tests that would be necessary in less powerful languages

                      What kind of small tests are you envisioning?

                      Furthermore, even if we grant that statement as being true for the sake of discussion, there is still the problem that the primary intent of tests is to offer documentation. That the documentation is self-validating is the reason you're not writing it in Microsoft Word instead, but that is really a secondary benefit.

                      If you defer to the type system then you're moving some, but not all, of the documentation into the type system, fragmenting the information. Is that fair to other developers? In reality, you're going to want to write the tests anyway for the sake of consistency and completeness. A programmer needs to deliver something that works, of course, but also something that does a good job of communicating to other developers what is going on. Programming is decidedly not a solo activity.

                      Sure, in an ideal world you could document the entire program in the type system, but the languages people normally use simply don't have what it takes to enable that. They lean on testing instead. Worse is better, I suppose.

                      > Make an example test that "incidentally tests things like the shape" in C#, please.

                      We should probably talk about my fee structure first. I don't want you coming back crying that it was too much when I send you the bill for the work performed.

                      That said, under my professional duty to act in your interest, I expect you would be far better served thinking about how you might go about avoiding testing the shape given a random useful test. You don't really need my services here.

                      > I would not call Rust's type system 'half-assed' tho

                      It doesn't even have proper support for something basic like value constraints, let alone more advanced concepts. It has more features than Go, but I fail to see how that offers transcendence beyond half-assery. I'll grant you that it is less half-assed, just as I did earlier.

                      • nipah 2 days ago

                        > If you defer to the type system then you're moving some, but not all, of the documentation into the type system, fragmenting the information. Is that fair to other developers? In reality, you're going to want to write the tests anyway for the sake of consistency and completeness. A programmer needs to deliver something that works, of course, but also something that does a good job of communicating to other developers what is going on. Programming is decidedly not a solo activity.

                        As for this, as I said I really like having the type system as my documentation, and I don't know what exactly you are saying with "is that fair to other developers"; this is not only fair but very useful. In fact, this is WHY people really like well-typed libraries in the TypeScript world: they make using the thing MUCH easier and more guided, to the point that you don't even need to read the actual prose documentation that much, as opposed to just exploring in the code editor. As a very good example, I make many useful libraries for my teammates and they love using them and the convenience they provide, and I find joy when they say "oh, that's amazing, your library can do this exact thing I was needing to do because you thought about it before". This makes their life easier, it makes my life easier, and even when they use a library a bit wrong it will still work, because I made everything flow well and left easier paths for gradual usage when needed. It is not only fair, it is good if you know how to do it well, and I rarely need to write tests for my own code beyond what I write for my libraries (if they are right, generally speaking, the code that uses them has far fewer easy-to-make flaws, and thus I reduce the number of tests needed at the end of the day). As for whether programming is a "solo activity" or not, this depends on many things; I certainly have hundreds of solo projects of mine and I love working on them, and I also have projects that are developed along with other people, and I love them as well. Programming for me is a form of expression, an art and a job at the same time.
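
                        A minimal sketch of the kind of typed API I mean (TypeScript, with invented names; real libraries are richer than this):

                            // The return type itself steers callers to handle both outcomes;
                            // the editor autocompletes the fields and the compiler rejects misuse.
                            type Parsed<T> = { ok: true; value: T } | { ok: false; error: string };

                            declare function parseConfig(raw: string): Parsed<{ port: number }>;

                            const result = parseConfig('{"port": 8080}');
                            if (result.ok) {
                              console.log(result.value.port); // only reachable when parsing succeeded
                            } else {
                              console.log(result.error);      // the compiler makes you consider this branch
                            }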

                        > Sure, in an ideal world you could document the entire program in the type system, but the languages people normally use simply don't have what it takes to enable that. They lean on testing instead. Worse is better, I suppose.

                        As I said before, Rust does have that, and many modern languages have more and more of it. I understand that the world mostly uses less powerful languages, but this is not their fault; most languages have dozens of legacy projects behind them and it is really hard to let go (I mean, there are important programs written in COBOL nowadays, and I assume that language is even worse than your "worse is better", but people still need to use it). I'm not advocating abandoning those projects, nor saying that languages with worse type systems are terrible, but that you should simply not say "it is better" just because people need to use it for other reasons, nor glorify their mediocrity. Mediocrity should be improved upon (and that's why even Go, a "simple language", is still gaining features from time to time; even it is not a crystallized stone of specific directives that will never change, and some day it will have more and more features as time moves on; all languages are slowly evolving, even COBOL itself, so if being "worse" is a goal, I think most of them are not following that goal).

                        > We should probably talk about my fee structure first. I don't want you coming back crying that it was too much when I send you the bill for the work performed.

                        > That said, under my professional duty to act in your interest, I expect you would be far better served thinking about how you might go about avoiding testing the shape given a random useful test. You don't really need my services here.

                        Oh, sorry, I thought this was a discussion where people were really trying to reach the truth, not some sort of "pay me if you want to see my point" kind of thing. I'm more of a "talk is cheap, show me the code" guy, and I will certainly not pay someone to discharge their own onus probandi. If this is how things will be, I think further discussion with you would be pointless.

                        > It doesn't even have proper support for something basic like value constraints, let alone more advanced concepts. It has more features than Go, but I fail to see how that offers transcendence beyond half-assery. I'll grant you that it is less half-assed, just as I did earlier.

                        And this is obviously an extreme form of exaggeration. I literally coded a basic working numeric system, with mathematical operations, in Rust's type system just for fun (and there are crates that do that); if that doesn't imply the language has a very powerful one, I don't think anything would. Obviously, I'm not saying that Rust has THE MOST powerful type system, I never once implied that, but it is not "half-assery" in any way, and it is also many times above what Go is able to do. It's not only "more features": it is fundamentally more open to changes and advancements than Go is.
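
                        To give a flavour of the same idea in this thread's own language (a deliberately tiny TypeScript sketch, not the Rust version I mentioned): numbers can be encoded as tuple lengths and "added" by concatenation, all checked at compile time.

                            // 2 and 3 as tuple types; concatenation plays the role of addition.
                            type Two = [unknown, unknown];
                            type Three = [unknown, unknown, unknown];
                            type Five = [...Two, ...Three]["length"];

                            const ok: Five = 5;      // compiles: the type system computed 2 + 3 = 5
                            // const nope: Five = 4; // rejected at compile time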

                        --------------- (part 2)

                        • 9rx 2 days ago

                          > I really like having the type system as my documentation, and I don't know what exactly you are saying with "is that fair to other developers"

                          Because you didn't bother to read what I wrote, again, that's why. I suggested it is not fair to fragment the information. If you want to duplicate the information, go nuts. But that rounds us back to the very beginning where we opened with the topic of growing tired of repeating yourself...

                          > I thought this was a discussion where people were really trying to reach the truth, not some sort of "pay me if you want to see my point" kind of thing.

                          Yes, it is a discussion, not a make work project. If you want to deliver a point in that discussion, just do it. No need for stupid games.

                          > I generally don't care that much about them, because I don't find the need for them most of the time

                          Exactly. Same reason why nobody uses them. [I know, I know, you think this needs to be proven. But it really doesn't. That is silly.]

                          However, this means that you also recognize that there is a line where more typing is not worth it. So, where, exactly is that line? You say "here", but then I'll say "but no, you also need this". You'll say "that is really not that important", but I'll say "no, it is!" We could go on like that forever.

                          Eventually a sane person will arrive and simply say: "It depends." And maybe someday you too will understand that statement.

                          > And this is obviously an extreme form of exaggeration.

                          It is not. I will grant you that it is the lowest hanging fruit for testing, so in practice it is probably not worth the effort, but it is a great indicator of how the type system is half-assed. If it truly believed in not half-assing it, it would be there, as it is in languages with robust type systems.

                          > For starters, things like "is the shape of this data correct"

                          When would a test like that ever be necessary? If the shape of your data is wrong somehow, the "documentation" tests won't be able to succeed either, so you implicitly find out that the shape is wrong anyway. There is no need to repeat yourself here. Not only is there no need for repetition, worse, tests like that usually end up making the test suite brittle and hard to manage.
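
                          To make that concrete, a sketch of a perfectly ordinary behavioural test (TypeScript with node:assert; parsePrice is made up):

                              import assert from "node:assert";
                              import { parsePrice } from "./price"; // hypothetical module under test

                              // This asserts behaviour only, yet it cannot pass unless the result
                              // also has the right shape (a numeric amount and a string currency).
                              // A separate "is the shape correct?" test would add nothing.
                              assert.deepStrictEqual(parsePrice("19.99 EUR"), {
                                amount: 19.99,
                                currency: "EUR",
                              });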

                          > First, I don't think this is true [at all].

                          I gathered. This seems to be the source of contention around the testing topic when we cannot even agree what testing is. From your vantage point I can understand how you are unable to recognize the overlap. But it remains that if you write useful tests, you implicitly also end up testing what the type system covers.

                          The type system is still incredibly beneficial for other reasons, of course. To a point. But, again, the returns are diminishing.

                      • nipah 2 days ago

                        > The results of software written in languages with robust type systems. Having mathematical guarantees that your program is correct is a good place to be, but climbing the mountain to get there is, well...

                        It is harder, obviously, but it still tends to be a matter of understanding most of the time. But I can understand what you mean now: you are not saying it is "impossible" to do so, but that it is very hard to do so, harder than using testing (even with a lower guarantee level). If that is the case I can partially buy it, but then your point would not be as strong; I mean, we need to make a language with a good type system that can prove things reasonably well and is not that hard to use. This is more of a call to action than a question of impossibility.

                        > See, now you're starting to understand exactly why these languages are intractable. But perhaps we can dumb things down to my mere mortal level: Why don't you use those languages for regular programming tasks?

                        Following my previous response: I generally don't care that much about them, because I don't find the need for them most of the time (this is part of why I said that mostly the right people matter, instead of everyone), but also because I find many of them designed in a way that is not ergonomic enough for me. This is more a design problem (which most languages have) than anything else. But also, for my everyday preferred programming tasks I generally use languages that have at least powerful enough type systems, like Rust, Swift, F#, C# (because it is on the higher end of mid-tier), or even Kotlin (which I like very much; it doesn't have everything I like but it is closer to Swift and has better compiler tooling), rather than languages with not-so-good type systems and features (like Go or C). In this sense I pretty much live up to my own standards: I just write tests for the things that matter and I use the type systems of those languages to prove things about my code. It works very well, and I very rarely experience problems with this; it is really satisfying for me, but I get that not everyone likes this way of coding.

                        > How, exactly, did you reach that conclusion? You don't have to be good at something to recognize when people are bad at something.

                        Wdym? You don't need to be good at something, but you surely need to [understand] it [better] than most people to realize that those same people are worse at [understanding] and [applying] it than you.

                        Even knowing that you don't know something is a sign that you have a better understanding of that thing than others, but you cannot look down on other people and call them dumb if you consider yourself to be at the same level as them; that would be irrational to believe (because you would have virtually the same knowledge and limitations).

                        So, if you say "most developers don't understand testing", I must take from your words that you at least know what a [better understanding of testing] is, or that you are at least more [aware] than other people of the limitations of their own testing (which implies privileged knowledge).

                        But if that were not the case, then an affirmation like "We'd do well to help them better understand testing" would be just pure insanity. If you believe "we" can help people understand something better, you must understand it [better] than them; there is no teacher who, ceteris paribus, knows less than his students about the specific matter he is teaching and in which the students have flaws.

                        > What kind of small tests are you envisioning?

                        For starters, things like "is the shape of this data correct" (at the base level of any strongly typed language), things like "was this object cleaned up at the end of the scope as intended" (for the destructors feature), and things like "did I forget to call some method, make some state change or do something else" (for the typestate concept). And even things like "is this function being possibly misused" (with type-system guarantees about mutability, nullability, aliasing and owning references, you can remove a whole class of specific tests like "is this argument invalid", "is this object being mutated outside of this function and thus possibly in an invalid state at some point in this function", "can I be sure I can optimize this code by doing in-place mutations without breaking other parts of the software that depend on it", etc). Obviously this is kind of abstract, but that is because testing is usually a pursuit of turning those generally abstract concepts into something practical, like "when the user is logging in, is the returned object in a consistent state, and are the services, managers and encryption tools being properly used" or "is the customer in a valid, known state at this point in code that is significantly more complex? Can I be sure that it is not null, so that I don't need to test against it at some point?", etc.
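
                        A quick sketch of the login/customer case in TypeScript (invented names): once the type says the value is present and valid, the downstream code needs neither the defensive checks nor the small tests for them.

                            interface Customer { id: string; email: string }
                            type AuthenticatedCustomer = Customer & { readonly sessionToken: string };

                            declare function login(email: string, password: string): AuthenticatedCustomer | null;

                            function placeOrder(customer: AuthenticatedCustomer, sku: string): void {
                              // Guaranteed non-null and already authenticated here; no "is it null?"
                              // or "is it in a valid state?" tests needed for this function.
                              console.log(`order ${sku} for ${customer.id}`);
                            }

                            const c = login("a@b.c", "hunter2");
                            if (c !== null) placeOrder(c, "SKU-1"); // the compiler forces the check once, at the boundary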

                        > Furthermore, even if we grant that statement as being true for the sake of discussion, there is still the problem that the primary intent of tests is to offer documentation. That the documentation is self-validating is the reason you're not writing it in Microsoft Word instead, but that is really a secondary benefit.

                        First, I don't think this is true [at all]. There are many kinds of tests, and the [intent] behind them differs. For example, I could say that the primary intent of tests is to ensure the given problem and the expressed code are aligned, to check that what I did is really doing what I intended, that I did not commit any logical errors when expressing the thing, and that I was not unable to express correctly what I intended to express (that is, whether, even if everything was correct from a purely computational standpoint, I effectively reached the state I was consciously trying to reach). I think a [side effect] of testing is that it turns out to be pretty good documentation for many problems, not that documentation is the [intended] goal of it. Maybe it is if you are writing the tests with that goal in mind, but not as an objective, unique truth.

                        And also, even if I accept this: type systems are extremely good ways of documenting the things you are doing. I have seen Haskell programmers say many times that they use the types of the functions they want to call, or of the things they want to build, as the way to find appropriate usages of a thing (i.e. if they need to convert a string to an int, they can search in their editor for [String -> Maybe Int] and they will find many useful functions, probably including the one they want, and everything will be very clear to them in this sense). Good types lead the programmers using them to correct code, because they make it very hard (or sometimes even impossible) to express incorrect programs using those types. Part of the reason I really like good type systems is that I am a very forgetful person; if I write something, the chances of me forgetting about it later down the line are very high, so I really like the sensation of coming back to a codebase and finding all the clues I left for myself (i.e. the types), discovering that to use this I need that other thing (or else it won't compile), and that the function I'm calling can fail in some specific ways signalled in the types (and now I remember, I need to do this and this), and how everything fits together like a good and very comfortable puzzle. This is my ideal documentation, and tests are also important for me to remember how I used this code in practice sometimes, but many times the types are really everything I need. This, obviously, is more anecdotal; it is how I view things, but I think many people would agree with me on this, and that it is not absurd at all as a conclusion.
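
                        The TypeScript analogue of that [String -> Maybe Int] example, just to show how much documentation the signature alone carries (the function is made up):

                            // `string -> number | undefined`: the signature already tells you that
                            // parsing can fail and that you must decide what to do when it does.
                            function parseIntStrict(s: string): number | undefined {
                              return /^-?\d+$/.test(s) ? Number(s) : undefined;
                            }

                            const port = parseIntStrict("8080");
                            if (port === undefined) throw new Error("not an integer");
                            console.log(port + 1); // narrowed to `number` only after the check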

                        -------------- I was bitten by the char limits here, so I'll put in parts (part 1)

        • umvi 4 days ago

          Overly expressive type systems have way more potential for footguns than simple type systems. In fact, I would say that overly expressive type systems make it easy to create unmaintainable code (still waiting on this showstopping bug which nobody can debug because it uses overly expressive types in TS: https://github.com/openapi-ts/openapi-typescript/issues/1769)

          • nipah 4 days ago

            I don't think TypeScript is an example of what people would call a "properly expressive type system". Sure, it is very expressive, but it is made to cover all the gaps JavaScript as a language has in a generally type-safe manner, and this calls for an EXTREMELY complex and open type system, much more so than most languages would ever have, so I don't think it really applies as an example. The gap between maintainable code and unmaintainable code sits between the chair and the screen, not in the type system of the language the person is using; the language merely makes that person more or less able to encode more things in specific places that can become unmaintainable (and anecdotally, most unmaintainable code I know doesn't even use complex type system features, it's just plain old messy state-mutating things scattered all around).

      • nipah 4 days ago

        This is not "the main reason", lol, it was never stated as such. The type system could be way more powerful and, having the same general features they would probably had still picked it up.

        • 9rx 4 days ago

          What realistic contender doesn't have all the same general features as Go? It doesn't exactly have many to choose from, none of them particularly esoteric, and most of them bare necessities required of any language.

          Let's be real: You can absolutely write "Go-style" code in just about any language that might have been considered for this. But you wouldn't want to, as a more advanced type system enables entirely different idioms, and it is in bad faith to other developers (including future you) to stray too far from those idioms. If ignoring idioms doesn't sound like a bad idea on day one, you'll feel the hurt and regret soon enough...

          Go was chosen because the idioms are generally in alignment with their needs and those idioms are wholly dependent on the shape of its type system.

          • nipah 3 days ago

            > What realistic contender doesn't have all the same general features as Go? It doesn't exactly have many to choose from, none of them particularly esoteric, and most of them bare necessities required of any language.

            I would say structural typing is actually very "esoteric" for most strongly typed languages, but this is not a problem.

            And proceeding: the implications of your response are very strange. Your point is essentially that "we should use Go, because it entails writing in only one idiom, and writing in languages that enable you to use more idioms -- more powerful languages -- is bad faith to other developers". But Hejlsberg himself said he chose Go because of specific characteristics of the compiler that was already written, not because it is "the ideal one in every single respect", while your point has implications that are absolutely more general. So I don't think he would agree with you that this was his reasoning for using Go (the "doesn't have other idioms" thing). I also don't think this whole "more idioms" thing even makes sense, but that is not needed to respond to this.

            • 9rx 3 days ago

              > Hejlsberg himself said he chose Go because of specific characteristics of the compiler

              He did, but much more importantly Cavanaugh said that he chose Go because it has similar semantics and code structure. In other words, idiomatic Go is similar to how the original code was written. While I am sure Hejlsberg's input was icing on the cake, it was not the ultimate determining factor. C# having the best compiler in the world on every front still wouldn't have ticked the boxes the guy in charge needed to tick.

              > So I don't think he would agree with you that this was his reasoning for using Go

              He may not, but it also wasn't his choice in the end anyway, so it's a bit strange that you are leaning on his word.

    • Cthulhu_ 4 days ago

      "depending on your needs" indeed - does the TSC compiler need a stronger type system?

      Advanced type systems are guard rails to spot and avoid issues early on, but that role can be fulfilled by tests as well, and Typescript has a LOT of tests and use cases that will be flagged up. Does it need a strong type system for the internal tooling on top of that?

      I'm not an authority on the matter and know nothing about the compiler's internals, but I'm confident in saying that the type system is good enough for this use case.

  • dmix 4 days ago

    Do we need to have these conversations weekly?

    • dimgl 4 days ago

      I wouldn't be asking if there wasn't a visceral reaction from Rust devs. I must have missed previous discussions on other threads.

      • J_Shelby_J 4 days ago

        Is this visceral reaction in the room with us now?

        Edit: I have reached the bottom of the thread and still have not seen this visceral reaction mentioned by the OP.

        • dimgl 4 days ago

          https://github.com/microsoft/typescript-go/discussions/411

          There's more reactions here. I think devs have lost the plot, tbh.

          • Cthulhu_ 4 days ago

            I closed that thread; the maturity level in there is... highly variable. There are a lot of kneejerk statements in there.

            Meanwhile, this decision was made or led by one of the few people who have developed multiple popular programming languages. I trust his opinion / decision more than internet commenters', including my own.

          • gauge_field 4 days ago

            I am not sure we are seeing the same thread. There is one reaction from a "Rust" dev (who seems to have a very new GitHub account) asking why not Rust. Most of the others seem to be from the C# side. The pattern also seems to be the same in the reddit thread: there is one post about why not Rust, and equally prominent (or more, depending on how you weigh it) is how other people react to the news.

            What is weird is how much people talk about how other people react. Modern social media is weird

            • hitekker 4 days ago

              There are at least 3 top-level threads criticizing the decision not to rewrite in Rust, including a RIR banner ad posted in the replies.

              Holy Language Wars are a spectator sport as old as the internet itself. It's normal to comment on one side fighting another. What's weird is pretending not to see the fighting.

              • Cthulhu_ 4 days ago

                I do like seeing that there are threads (on here and on the GitHub page) advocating for or asking about C#; it's healthy to bring up different languages.

                But, advocates for language X need to make sure they read and understand the requirements and tradeoffs, which could probably have been communicated better.

0xcb0 4 days ago

After years of PHP, I came to TypeScript nearly 4 years ago (for web front- and backend development). All I can say is that I really enjoy using this programming language. The type system is just about enough to be helpful, and not too much to be in your way. Compiling the codebase is quite fast compared to other languages. With a 10x speedup, it will be so much fun to code.

Never been a big fan of MS, but I must say that TypeScript is well done imho. Thanks for it and all the hard work!

  • zem 4 days ago

    Microsoft has historically been great at programming languages. QBasic, Visual Basic, C#, and F# are all excellent.

dustedcodes 4 days ago

Meanwhile .NET developers are still waiting for Microsoft to use their own "inventions" like Blazor, .NET MAUI, Aspire, etc. for anything meaningful. Bless them.

  • Cthulhu_ 4 days ago

    "anything meaningful"? Does this mean that those technologies aren't used for anything meaningful, or that you're simply not aware of them?

    (I'm simply not aware of them but that also means I won't make any statements about these)

  • Shocka1 a day ago

    Doesn't matter. Some couldn't care less what MSFT is doing - a Blazor app I developed in super-speed time has collected 40k transactions in the last 4 months. It did its job.

  • zuhsetaqi 4 days ago

    Aspire is made with Blazor

  • lomase 3 days ago

    Bing is made with ASP .net

zestyping 4 days ago

That's a pretty misleading clickbait title. TypeScript isn't getting 10x faster; the TypeScript compiler is getting 10x faster.

I would argue it needs editing, as it violates the HN guideline:

> use the original title, unless it is misleading or linkbait; don't editorialize.

  • campers 4 days ago

    There isn't a TypeScript runtime; it is just a JavaScript/ECMAScript compiler/transpiler with a type checker and language server.

  • salmonellaeater 4 days ago

    My initial interpretation of the title was that the TS team was adding support for another, faster, target such as the .NET runtime or native executables. The title could use some editing.

  • Cthulhu_ 4 days ago

    Yeah, it's a bit linkbaity as it implies that the runtime is 10x faster; just adding the word 'compiler' or 'type checker' to the title would fix it.

    • agos 3 days ago

      There isn't a single runtime, it's quite clear that Typescript here means the compiler

aib 4 days ago

Just tried it on our codebase. Getting over a thousand errors, a good portion of which seem to be:

    ../../../tmp/typescript-go/built/local/lib.dom.d.ts:27982:6 - error TS2300: Duplicate identifier 'KeyType'.
    27982 type KeyType = "private" | "public" | "secret";
               ~~~~~~~

      ../../../tmp/typescript-go/built/local/lib.webworker.d.ts:9370:6 - 'KeyType' was also declared here.
        9370 type KeyType = "private" | "public" | "secret";
                  ~~~~~~~
Probably an easy fix.

Running it on another portion results in a SIGSEGV with a bad/nil pointer dereference, which puts me in the camp of people questioning the choice of Go.

  • wiseowise 4 days ago

    > Running it on another portion results in a SIGSEGV with a bad/nil pointer dereference, which puts me in the camp of people questioning the choice of Go.

    They would still be setting up the project if it was Rust.

  • Cthulhu_ 4 days ago

    It's very early days (perhaps too early?); running into issues caused by what may very well be an automated conversion is to be expected, and not down to the language choice.

    Why not find out what's going wrong and submit a bug report / merge request instead of immediately dismissing a choice made by one of the leading authorities in programming languages in the world?

  • yohannesk 4 days ago

    If you are wondering why not Rust instead of Go: they outline why Rust was not chosen. This is a port, not a reimplementation, and many of the data structures cannot easily be ported to Rust, such as nodes with cyclic dependencies. Check the longer interview here: https://www.youtube.com/watch?v=10qowKUW82U&ab_channel=Michi... Also, I think the discussion on esbuild's choice of language applies here as well, as the situation is very similar. You can find it here on HN.
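
    For anyone wondering what "cyclic" means here, a rough TypeScript-style sketch of the parent/child back-references involved (simplified, not the actual compiler types):

        // Children point back to their parent; a garbage collector handles this
        // trivially, but it maps awkwardly onto Rust's ownership rules.
        interface AstNode {
          kind: string;
          parent: AstNode | undefined;
          children: AstNode[];
        }

        const root: AstNode = { kind: "SourceFile", parent: undefined, children: [] };
        const child: AstNode = { kind: "Identifier", parent: root, children: [] };
        root.children.push(child); // root and child now reference each other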

haxiomic 4 days ago

Sounds like they're automatically generating some amount of the Go code from the TS sources [0]. I wonder if they will open up the transpilation effort; that way you'd create a path for other TypeScript projects to generate fast native binaries.

Opened discussion [1]

- [0] https://github.com/microsoft/typescript-go/discussions/410

- [1] https://github.com/microsoft/typescript-go/discussions/467

adriancooney 4 days ago

I know this is a port but I really hope the team builds in performance debugging tools from the outset. Being able to understand _why_ a build or typecheck is taking so long is sorely missing from today's Typescript.

  • slackerIII 4 days ago

    Yes, 100% agree. We've spent so much time chasing down what makes our build slow. Obviously that is less important now, but hopefully they've laid the foundation for when our code base grows another 10x.

slackerIII 4 days ago

This is amazing. Everyone that picked TS for a big project was effectively betting that someone would do this at some point, so it's incredible to see it finally happen. Thanks to everyone involved!

electroly 4 days ago

I wonder, for a Microsoft project, why not C#? Would have been a nice win for the home team.

  • pier25 4 days ago

    Anders explain why Go in this podcast:

    https://youtu.be/ZlGza4oIleY?t=1005

    • dagw 4 days ago

      TL;DR:

      - Native executable support on all major platforms

      - He doesn't seem to believe that AOT-compiled C# can give the best possible performance on all major platforms

      - Good control of the layout of data structures

      - Had to have garbage collection

      - Great concurrency support

      - Simple, easy to approach, and great tooling

      • ramon156 4 days ago

        So wild that most of these points were things C# was supposed to be good at, and they all boil down to "it's just not as good in C# as in Go".

        • dagw 4 days ago

          Yea, sounds like cross platform AOT compiled C# not being mature and performant was a big reason that C# was rejected.

          One other thing I forgot to mention was that he talked about how the current compiler was mostly written as more or less pure functions operating on data structures, as opposed to being object-oriented, and that this fits very well with the Go way of doing things, making a 1:1 port much easier.
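
          Roughly that shape of code, I gather (a toy TypeScript sketch, not the actual compiler): plain tagged data structures plus standalone functions over them, which maps onto Go structs and functions almost 1:1.

              // Data as a tagged union, behaviour as a free function over it.
              type Expr =
                | { kind: "num"; value: number }
                | { kind: "add"; left: Expr; right: Expr };

              function evaluate(e: Expr): number {
                switch (e.kind) {
                  case "num": return e.value;
                  case "add": return evaluate(e.left) + evaluate(e.right);
                }
              }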

          • pier25 4 days ago

            > sounds like cross platform AOT compiled C# not being mature and performant was a big reason

            I don't think it was the performance. C# is usually on par or faster than Go.

            Could be the lack of maturity, but also I believe Go produces smaller binaries, which makes a lot of sense for a CLI.

            • xandrius 4 days ago

              I have never heard of C# being faster than Go, except on certain batched jobs, and even there Go can be better.

              • pier25 4 days ago

                For example, look at the Techempower benchmarks.

                I benchmarked HTML rendering and Dotnet was 2-3x faster than Go using either Templ or html/template.

                Etc.

                • tacticus 4 days ago

                  The C# benchmarks where they didn't use the framework to do any of the actual templating?

                  Those hardcoded byte arrays are how everyone does templating everywhere, right?

                  Or are you talking about after they changed their "platform" test back to not do that, and it is now substantially slower than Go?

                  https://dusted.codes/how-fast-is-really-aspnet-core

                  • neonsunset 4 days ago

                    This is sorely outdated, although for anyone with an axe to grind Dustin's articles are convenient enough.

                  • pier25 3 days ago

                    The HTML benchmarks I did myself with Razor pages.

                    The link you provided is severely outdated, using data from Round 21 and .NET 6.

          • fabian2k 4 days ago

            Immaturity of native AOT sounds like a likely culprit here. If they're after very fast startup times, running classic C# is out. And native AOT is still pretty new.

            You can write pure functions operating on data structures in C#, it's maybe not as idiomatic as in Go, but it should not cause problems.

            • dagw 4 days ago

              > it's maybe not as idiomatic as in Go, but it should not cause problems

              Based on interviews, it seems Hejlsberg cares a lot about keeping the code base as idiomatic and approachable as possible. So it's clearly a factor.

        • osigurdson 4 days ago

          I don't really get the OOP argument from Anders. You don't need to do OOP stuff in C# - just write a bunch of static functions if you want. However, I totally get the AOT aspect. Creating a simple CLI app meant for wide distribution in .NET isn't great, because you either have to ship the runtime or try to use AOT, which is very much a step out. I have come to the same conclusion and used Go on some occasions for the same reason, despite not knowing it very well.

          If doing a web server, on the other hand, these things wouldn't matter at all as you would be running a container anyway.

          • neonsunset 4 days ago

            A “step out” is a single flag passed to “dotnet publish” or set in the project manifest.

            • osigurdson 3 days ago

              That, and then deal with all the gotchas.

              • neonsunset 3 days ago

                There are none in new projects. There are few in existing ones. I guess it is difficult to have a conversation with someone who is committed to looking for the aforementioned "gotchas".

                • osigurdson 3 days ago

                  I generally agree that C# == Java == Go in most respects (and C# has far more features), and I generally think C# is nicer to use and probably faster for many use cases. But it is not the same as Go at all in terms of AOT. In Go that is just the default and always works, basically on all platforms; in C#, that is not the case. Do you really want to find out halfway through the project that a popular library uses some reflection and therefore doesn't work?

                  In my opinion, using C# for this use case isn't a practical choice on a greenfield project.

                  • neonsunset 3 days ago

                    You may want to read the documentation first before responding.

                    • osigurdson 3 days ago

                      Where in the documentation does it state that everything just works (as it would in Go)? I see a list of incompatibilities / limitations, etc., that apply not only to your own code but to any 3rd party library.

                      • neonsunset 3 days ago

                        Read up on the trimming warnings (if there are none, it means everything just works) and try out a couple of projects :)

                        Very specific areas require reflection that is not statically analyzable, the main user being serialization, and serialization just happens to be completely solved.

                      • whoknowsidont 3 days ago

                        The OP is a troll. He just claims everyone else is out to get him and .NET whenever they point out some inconvenient, non-ideal aspect of the ecosystem.

                        Every tooling has its faults.

        • Cthulhu_ 4 days ago

          I'm happy to see it, to be honest; at this point C# is 25 years old, and since then there's been a lot of innovation and development in programming languages and ecosystems (as well as at least 10x more software developers - I am guessing at this number). Current-day programming languages, including Go and TypeScript, will have taken a lot of lessons from C#, including in things like generics and the like.

          • neonsunset 4 days ago

            Go is a step back in key areas: generics, nullability, functional constructs, concurrency. The praise of the latter in particular is egregious. Surely we can do better in 2025 than having to wire up the transfer of data and the forking/joining of logical flows manually? The concept of virtual threading in Go, despite having a nice implementation, did not progress much from what we had 10 or even 20 years ago either. It can be more convenient than async/await with hot-started tasks/futures if you are predominantly dealing with sequential code, but for highly concurrent logic, working with goroutines is more ceremonious.
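
            A sketch of the fork/join style I mean, with hot-started promises (TypeScript since that's the thread's language; the two helpers are invented). Both calls start immediately and the join is one expression, with no channels or wait groups to wire up by hand:

                declare function fetchUser(id: string): Promise<{ name: string }>;
                declare function fetchOrders(id: string): Promise<string[]>;

                async function summary(id: string): Promise<string> {
                  // fork: both requests are already in flight; join: a single await
                  const [user, orders] = await Promise.all([fetchUser(id), fetchOrders(id)]);
                  return `${user.name} has ${orders.length} orders`;
                }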

            • commandersaki 3 days ago

              Communicating Sequential Processes (CSP) makes it a lot easier to reason about concurrency. It may be old (1978) but is foundational.

            • desumeku 3 days ago

              "Go is not meant to innovate programming theory. It’s meant to innovate programming practice.”

  • triceratops 4 days ago

    A Microsoft project led by the chief architect of C#, no less.

  • _bin_ 4 days ago

    Same reason I hate gradle/maven/ant: shipping a big runtime that many devs won't have installed for a build tool is bad. Even with AOT, you still need a dotnet runtime.

DeathArrow 4 days ago

The news for me is Microsoft teams relying on Go.

Strange choice to use Go for the compiler instead of C# or F#.

Now if they will have problems, they will depend on the Go team at Google to fix them.

  • pjmlp 3 days ago

    Actually, they have their own Go compiler.

    https://devblogs.microsoft.com/go/

    Just like they have their own Java distribution, after everything that caused C# to exist in the first place:

    https://devblogs.microsoft.com/java/

    Yes, the new DevDiv is not like the Microsoft of old.

    But then the .NET team shouldn't be asking every now and then on social media why other languages get chosen outside the Windows ecosystem.

  • Cthulhu_ 4 days ago

    That's not how open source works / should work, though. If the Go maintainers (who happen to work at Google) ignore problems that the MS teams flag up, they can fork the language.

    The opposite would be true as well, teams at Google using Typescript or C# would rely on Microsoft to fix any issues.

  • leosanchez 4 days ago

    > Now if they will have problems, they will depend on the Go team at Google to fix them.

    Or collaborate with Go team.

  • whoknowsidont 3 days ago

    >Now if they will have problems, they will depend on the Go team at Google to fix them.

    MS literally already has a whole team around Go. And if they didn't, Go is completely open source.

    C# is open-source in name only.

  • commandersaki 3 days ago

    I know a few teams in Azure Networking that use Go. I don't think it's uncommon.

stuaxo 4 days ago

I'd like to see if it makes a difference to the version of DOOM that runs in the TypeScript type system.

https://news.ycombinator.com/item?id=43184291

https://www.youtube.com/watch?v=0mCsluv5FXA

  • dimitropoulos 4 days ago

    hi! author of the Doom thing here. While I won't be the one to try, my answer is "absolutely yes, it will make a massive difference". Sub-1-day Doom-first-frame is probably a possibility now, if not much better, because the largest bottleneck for Doom-in-TypeScript-types was actually serializing the type to a string, which may well be considerably more than 10x faster. Hopefully someone will try some day!

    • Cthulhu_ 4 days ago

      > Sub-1-day Doom-first-frame

      Love it :D

  • dlahoda 4 days ago

    I guess they started the rewrite exactly because of the Doom performance. The timelines match.

  • jack4818 4 days ago

    Haha this was my first thought too

robinsonrc 4 days ago

Sad to see them using Go and not Anders’s own language (Turbo Pascal 7) for this

  • whattidywhat 3 days ago

    There should be an award for this comment.

ethan_smith 4 days ago

Wow, this is huge! A 10x speedup is going to be game-changing for large TypeScript codebases like ours. I've been waiting for something like this - my team's project takes forever to typecheck on CI and slows down our IDE.

Hopefully this will also reduce the memory footprint, because my VS Code IntelliSense keeps crashing unless I give it like 70% of my RAM; it's probably because of our fairly large graphql.ts file, which contains auto-generated GraphQL types.

hamandcheese 4 days ago

I love all this native tooling for JS making things faster.

I kinda wonder, though, how many of these tools will still be crazy fast in 5 or 10 years. Hopefully all of them! But I also would not be surprised if this new performance headroom is eaten away over time until things become only just bearable again (which is how I would describe the current performance of TypeScript).

  • unilynx 4 days ago

    Even if they freeze TypeScript development after the native implementation, given that the current performance was apparently acceptable to the current users, type complexity will just grow to use up the headroom.

    Plus, using TS directly to do runtime validation of types will become a lot more viable without having to precompile anything. And not only server-side: we'll compile the whole thing to WASM and ship it to the client to do our runtime validation there.

tracker1 4 days ago

Given the direction of and effort put into projects like rspack, rolldown, etc., why were they not considered as possible collaboration projects or integrations for this?

This isn't a knock against Go or necessarily a promotion of Rust, just seems like a lot of duplicated effort. I don't know the timelines in place or where the community projects were vs. the internal MS project.

cjbgkagh 4 days ago

My read on why Go and not AOT C# is that it would be more difficult to get C# programmers to give up idiomatic OOP in C# than it would be to get C# programmers to switch to Go. Go is being used as a forcing function to push dev-culture change. This wouldn't generalize to teams that have other ways of dealing with cultural change.

eapriv 4 days ago

It’s not obvious from the text, but the compiler was previously written in TypeScript (which was kind of a strange choice for the language to write a compiler in).

  • akmittal 4 days ago

    The TypeScript compiler is more of a transpiler, not a typical compiler that creates a binary. I don't think it was a weird choice.

  • pjmlp 4 days ago

    Bootstrapping compilers is a common activity and TypeScript is a nice language.

    • eapriv 4 days ago

      "Nice" doesn't mean "well suited for writing a compiler in". It's strange to think that all languages should be equally good for writing all kinds of things, and choosing a web language for a non-web task is doubly strange.

    • nailer 4 days ago

      Yep. I remember years ago when they celebrated getting the C# compiler working in C#.

      • atonse 4 days ago

        That was the Roslyn project! Yes, they were also excited that it would allow more devs to hook into the compiler and also enhance it.

  • styfle 4 days ago

    It's not strange, it's very common. It's called "bootstrapping".

    > Bootstrapping is a fairly common practice when creating a programming language. Many compilers for many programming languages are bootstrapped, including compilers for ALGOL, BASIC, C, C#, Common Lisp, D, Eiffel, Elixir, Go, Haskell, Java, Modula-2, Nim, Oberon, OCaml, Pascal, PL/I, Python, Rust, Scala, Scheme, TypeScript, Vala, Zig and more.

    https://en.wikipedia.org/wiki/Bootstrapping_(compilers)

    • eapriv 4 days ago

      Yet people wouldn’t write a Fortran compiler in Fortran, or a MATLAB compiler in MATLAB.

  • uncenter 4 days ago

    Is it not common to write compilers for languages in the language being compiled itself? Rust does this, I think?

    • steveklabnik 4 days ago

      It is fairly common, yes. Sometimes those compilers (or interpreters) aren't the primary implementation, but it's certainly a thing that happens often.

      Most of the Rust compiler is in Rust, that's correct, but it does by default use LLVM to do code generation, which is in C++.

    • uncenter 4 days ago

      note that as others have said, "compiled" is a stretch, but nevertheless...

      • kazinator 4 days ago

        Programs that are less than full compilers in some sense can be bootstrapped.

        For instance, this compiler for a pattern matching notation has parts of its implementation written using the notation itself:

        https://www.kylheku.com/cgit/txr/tree/stdlib/match.tl

        Some pattern matching occurs in the function match-case-to-casequal. This is why it is preceded by a dummy implementation of non-triv-pat-p, a function needed by the pattern matching logic for classifying whether a pattern is trivial or not; it has to be defined so that the if-match and other macros in the following function can expand. The stub just says every pattern is nontrivial, a conservative guess.

        non-triv-pat-p is later redefined. And it uses match-case! So the pattern matcher has bootstrapped this function: a fundamental pattern classification function in the pattern matcher is written using pattern matching. Because of the way the file is staged, with the stub initial implementation of that function, this is all bootstrapped in a single pass.

joewood1972 4 days ago

One question that springs to mind is the in-browser "playground" and hosted coding use-case. I assume WASM will be used in that scenario. I'm wondering what the overhead is there.

  • ivanjermakov 4 days ago

    Main overhead is shipping Go's WASM runtime to the client

goda90 4 days ago

This will be very welcome. I've been working on refactoring very large TypeScript files in a very large solution in VS2022. Sometimes it gets into a state where just editing the code or copy/pasting causes it to hang for a few seconds and the fans on my workstation to take off like a jet engine. The typing advantages my team has gotten from migrating our codebase to TypeScript have been invaluable, but the performance implications really hurt.

  • Cthulhu_ 3 days ago

    It's the same with formatting and linting, I've heard some people mention long delays caused by eslint / prettier. Biome is faster for formatting, but the eslint plugin ecosystem is still too important for us to switch to Biome for linting as well.

whoknowsidont 3 days ago

I'm glad the team was able to pick something because it was a good fit for them and their goals, and not because of perception or love of some tech-stack.

Programming languages are tools. Nothing more.

CLiED 4 days ago

Few things are more Microsofty than a team reaching for a competitor's language instead of their own, and to boot, none of the reasons given so far seem credible. Good job to the team nonetheless.

  • Cthulhu_ 3 days ago

    I think it's a mature decision, besides, Go is an open source project, calling it "a competitor's language" is a bit derisive. The developers behind Go, Typescript, C#, etc were designing languages well before they were hired by those companies, I don't think they consider their languages a "google" or "microsoft" specific language per se.

  • sesm 4 days ago

    Totally agree about the reasons; they have some hidden agenda behind this decision that they don't want to disclose. Rewriting in native code allows a step-by-step rewrite using the JS runtime with native extensions, but moving to a different VM mandates a big rewrite.

    My most plausible guess would be that the compiler writers don't want to dig into native code and performance; writing a TS-to-Go translator looks like a more familiar task for them. The lack of any performance analysis of the JS version anywhere in the announcements kinda confirms this.

zombot 3 days ago

No, it's not TypeScript that is 10 times as fast, only the TypeScript compiler. Bad title. Also, "10x faster" would be a factor of 11.

synergy20 4 days ago

In Golang, wow. That gives me more confidence to adopt Go in projects.

maginx 2 days ago

I'm curious about the 10x via implementation in Go - couldn't it have been realized otherwise? Finding the hotspots, reimplementing them using better algorithms, if necessary move a few critical paths to native etc. Or even improving the JIT itself which might benefit all programs. Just wondering because I wouldn't think that the JIT-overhead was that much that you could gain 10x just reimplementing in Go (or C, assembly etc)... that is something I would only have expected if going from an interpreted context.

  • tefkah 2 days ago

    Hejlsberg has explained this in some interviews: roughly a 3x speedup from going native, and another 3-4x speedup from being able to actually do effective multithreading.

presentation 4 days ago

I've been dreaming about this for years! Never been so pumped.

DanielHB 4 days ago

Any plans for an AOT version of TypeScript with strict typing that targets WASM or LLVM?

  • spankalee 4 days ago

    If you squint, Porffor[1] might end up being something like that.

    It doesn't use type hints yet, and the difficulty there is that you'd need a sound type system in order to rely on the types. You may be able to use type hints to generate optimized and fallback functions, with type guards, but that doesn't exist yet and it sounds like the TypeScript team wants to move pretty quickly with this.

    [1]: https://porffor.dev/

  • Ciantic 4 days ago

    This is what I would have liked too: Figure out a sufficient subset of TypeScript that can be compiled to native/WASM and then write TSC in that subset.

    While I like faster TSC, I don't like that the TypeScript compiler needs to be written in another language to achieve speed; it kind of reminds everyone that TS isn't a good language for complicated CPU/IO tasks.

    Given that the TypeScript team has resigned itself to the fact that JavaScript engines can't run the TypeScript compiler (TSC) sufficiently fast for the foreseeable future and is rewriting it entirely in Go, it is unlikely they will seek to do AOT.

odyssey7 4 days ago

The key:

> immutable data structures --> "we are fully concurrent, because these are what I often call embarrassingly parallelizable problems"

The relationship of their performance gains to functional programming ideas is explained beginning at 8:14 https://youtu.be/pNlq-EVld70?feature=shared&t=522

trashface 4 days ago

I can see why they didn't use Rust; I've written little languages in it myself, so I know what is involved, even though I like the language a lot. But I'm quite surprised they didn't use C#. I would have thought ahead-of-time optimized C# would give nearly the same compilation speed as Go. They do seem to be leaning into concurrency a lot, so maybe it's more about Go's implementation of that (CSP-like), but doesn't .NET have a near-equivalent to that? Have not used it in a while.

Also I get the sense from the video that it still outputs only JS. It would be nice if we could build typescript executables that didn't require that, even if was just WASM, though that is more of a different backend rather than a different compiler.

Edit: C# was addressed: https://github.com/microsoft/typescript-go/discussions/411#d...

register 4 days ago

I really wonder why this project was not developed in .NET Core. It would then have been possible to embed this in .NET projects, increasing the number of available libraries in the ecosystem. Also, it would have leveraged the .NET GC, which is better than Go's. Rewriting in Go really doesn't make sense to me.

paxys 4 days ago

Faster compilation is great, but what I'm really excited for is a faster TS language server. Being able to get autocomplete hints, hover info, goto definition, error squiggles and more at anything close to 10x faster is going to be revolutionary when working in large TS codebases.

gwbas1c 4 days ago

This is frustrating:

> The JS-based codebase will continue development into the 6.x series, and TypeScript 6.0 will introduce some deprecations and breaking changes to align with the upcoming native codebase.

> While some projects may be able to switch to TypeScript 7 upon release, others may depend on certain API features, legacy configurations, or other constraints that necessitate using TypeScript 6. Recognizing TypeScript’s critical role in the JS development ecosystem, we’ll still be maintaining the JS codebase in the 6.x line until TypeScript 7+ reaches sufficient maturity and adoption.

It sounds like the Python 2 -> 3 migration, or the .Net Framework 4 -> .Net 5 (.Net Core) migration.

I'm still in a multi-year project to upgrade past .Net Framework 4; so I can certainly empathize with anyone who gets stuck on TS 6 for an extended period of time.

  • lenkite 4 days ago

    Better a language that deprecates and breaks things at regular intervals of time compared to a language that has Forever Backward Compatibility like C++ and evolves into a mutated, tentacled monster that strangles developers who are trying to maintain a project.

  • kansface 4 days ago

    I lived and worked through the Python 2->3 fiasco, working on a Python library that had to run on both versions. I have since abandoned the language. Python 3 was both slower and not backwards compatible, whereas TSC 7 is 10x faster and uses half the memory. I'm not worried.

  • ricardobeat 4 days ago

    This is mostly about the tooling and ecosystem, they want to stop things from depending on the internal workings of the compiler. If you just want to write and compile TS you'll be fine, it does not mean breaking changes to actual TypeScript grammar.

  • brokencode 4 days ago

    Yeah, this is not ideal. I’m hoping that the breaking changes don’t affect the code at my work, since we also had to spend multiple years on a major .NET Core transition. I want the faster compiles right away, not in a few years.

melodyogonna 4 days ago

Oh man, this is great. I've been having performance issues with TSC for language services.

My theory - that Go will always be the choice for things like this when ease, simplicity, and good (but not absolute) performance is the goal - continues to hold.

alberth 4 days ago

Dumb question: is this a 10x speed up in the run-time of TypeScript ... or just the build tooling?

And if it's run-time, can we expect browsers to replace V8 with this Go library?

(I realize this is a noob/naive question - apologies)

  • jakebailey 4 days ago

    This is specifically about the performance of the TypeScript toolchain (compiler, editor experience); the runtime code generated is the same. TypeScript is just JS with types.

  • Tadpole9181 4 days ago

    There is no Typescript runtime, it's just a transpiler.

  • LVB 4 days ago

    Just building (and supporting features like LSP)

umvi 4 days ago

This is great news. We actually use esbuild most of the time to transpile TS files because tsc is so slow (and only run tsc in CI/CD pipelines). Coincidentally, esbuild is also written in Go.

stef-13013 3 days ago

The main problem is that AOT C# is not as mature on platforms other than Windows, if that.

So, to me, Hejlsberg's choice sounds pretty logical.

As for why Go? Why not...

skwee357 4 days ago

“Developers rewrite tools from dynamic language to statically compiled one - improves performance by 10x”

Also, what’s up with 10x everywhere? Why not 9.5x or 11x?

  • Cthulhu_ 3 days ago

    If you want to be pedantic, the benchmarks they ran showed speedups of 10.4x, 10.1x, 13.5x, 9.5x, 9.1x and 11.0x, for an average of about 10.6x.

nailer 4 days ago

Keep in mind most apps made in frameworks aren't using `tsc` but rather existing tools like `esbuild` which are native binaries.

  • sethaurus 4 days ago

    That's true of the compilation step, but type-checking always uses `tsc`. There's no TS spec, so it's very hard to build a fully-compatible competing implementation. Erasing the types syntactically is a lot easier.
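
    To make the split concrete, here's a minimal sketch using the current JS-based compiler API (illustrative only): `ts.transpileModule` erases syntax per file without type checking, while a full `ts.createProgram` + `ts.getPreEmitDiagnostics` pass is what actually checks types.

```ts
import * as ts from "typescript";

// A file with a type error that is still syntactically valid.
const source = `const n: number = "oops";`;

// Fast path: per-file syntactic erasure, no checker involved. This is the part
// that esbuild-style tools replicate.
const { outputText } = ts.transpileModule(source, {
  compilerOptions: { target: ts.ScriptTarget.ES2020 },
});
console.log(outputText); // -> const n = "oops";  (the type error goes unnoticed)
```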

vivzkestrel 4 days ago

Have there been any talks/progress on native inclusion of TypeScript for type checking, or on path resolution with Node.js without using tsc, ts-node, or tsx, plus native VS Code TS debugging and testing support? We are 22 versions into Node.js and still the support seems limited at best. Is it possible to share a roadmap of what is being done in this territory?

NiloCK 3 days ago

One outsized impact of this is going to be on agentic LLM programming workflows, where compile and test time are trending toward being the dominant bottlenecks.

See how many spaghetti types get churned through this faster transpiler.

Didn't expect Jevons paradox popping up for compilers.

aiiizzz 4 days ago

So is the language server still not going to match lsp spec? Even though it's getting a complete rewrite?

jujadjwdfs 4 days ago

Not sure if this point was brought up but I think it's worth considering.

If the Typescript team were to go with Rust or C# they would have to contend with async/await decoration and worry about starvation and monopolization.

Go frees the developer from worrying about these concerns.

  • neonsunset 4 days ago

    Go is more vulnerable to thread starvation when you go across interop. If you do not, it has better scheduling fairness but is less efficient at firing off new short-lived goroutines than .NET is at tasks.

localghost3000 4 days ago

Something that kind of got understated in here IMO is the improved refactoring and code intelligence that this will unlock. Very exciting! I am looking forward to all the new tooling and frameworks that come out of this change. TS is already an amazing language and just keeps getting better!

  • ilrwbwrkhv 4 days ago

    From the post:

    > Modern editors like Visual Studio and Visual Studio Code have excellent performance.

    Well I am not sure we are on the same page here. Still, fingers crossed.

sublinear 4 days ago

Typescript compiles to javascript, so does this not prove what people have been screaming from the rooftops for so long that there's a significant performance penalty with typescript for almost no actual benefit?

  • danielheath 4 days ago

    > a significant performance penalty with typescript

    There's a significant performance penalty for using javascript outside the browser.

    I'm not aware of any JS runtime outside a browser that supports concurrency (other than concurrently awaiting IO), so you can't do parallel compilation in a single process.

    It's generally also very difficult to make a JS program as fast as even a naive go program, and the performance tooling for go is dramatically more mature.

    • sapiogram 3 days ago

      > I'm not aware of any JS runtime outside a browser that supports concurrency (other than concurrently awaiting IO), so you can't do parallel compilation in a single process.

      You haven't looked very hard then, NodeJS has supported worker threads for years. However, to uphold Javascript's safety guarantees, they can only communicate via message passing, or sharing a special `SharedArrayBuffer` datatype, neither of which are well suited to sharing large immutable data structures.
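
      As a rough sketch of that limitation (assuming a CommonJS build so `__filename` resolves to the compiled worker file), anything crossing a worker boundary with postMessage is structured-cloned rather than shared:

```ts
import { Worker, isMainThread, parentPort } from "node:worker_threads";

if (isMainThread) {
  // postMessage copies the payload (structured clone); nothing is shared,
  // which is why handing a large immutable AST to a worker is expensive.
  const worker = new Worker(__filename);
  worker.on("message", (sum: number) => {
    console.log("sum from worker:", sum);
    void worker.terminate();
  });
  worker.postMessage([1, 2, 3, 4]);
} else {
  parentPort?.on("message", (nums: number[]) => {
    parentPort?.postMessage(nums.reduce((a, b) => a + b, 0));
  });
}
```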

  • Cthulhu_ 3 days ago

    Nope; 'compiling to javascript' is a relatively trivial operation, just remove the type information. This is what Babel and nowadays NodeJS itself are doing.

    What is more important is that tsc does typechecking, which is a static analysis of sorts to ensure code correctness. But this has nothing to do with runtime performance, that's entirely in JS land and in JS transpilers / optimizers.

  • crabmusket 3 days ago

    I don't think anyone uses JavaScript for speed. Dynamic or scripting languages have been making this tradeoff since... as long as there has been programming?

  • salmonellaeater 4 days ago

    No.

    You seem to be referring to runtime performance of compiled code. The announcement is about compile times; it's about the performance of the compiler itself.

wodenokoto 4 days ago

So the end goal is that I can write a typescript application and deploy an executable to my server? Or is it just to deliver faster versions of typescript tools and MS developed typescript applications?

tomatofrank 4 days ago

Very pumped to see how this improves the experience in VSCode.

I've been revisiting my editing setup over the last 6 months and to my surprise I've time traveled back to 2012 and am once again really enjoying Sublime Text. It's still by far the most performant editor out there, on account of the custom UI toolkit and all the incredibly fast indexing/search/editing engines (everything's native).

Not sure how this announcement impacts VSCode's UI being powered by Electron, but having the indexing/search/editing engines implemented in Go should drastically improve my experience. The editor will never be as fast as Sublime but if they can make it fast enough to where I don't notice the indexing/search/editing lag in large projects/files, I'd probably switch back.

  • crabmusket 4 days ago

    > Not sure how this announcement impacts VSCode's UI being powered by Electron

    It has no bearing on this at all.

  • efields 4 days ago

    Sublime Text has been my main since at least then as well. I can _see_ the lag in VSCode.

srott 4 days ago

Funny, until now I always thought that TypeScript is JavaScript with some C# vibes

https://news.ycombinator.com/item?id=43320086

  • Cthulhu_ 3 days ago

    They're by the same guy, so that tracks. C# did a lot of groundwork for Typescript and other newer language's type systems, like Java did for C#.

dimitropoulos 4 days ago

yes, this will definitely vastly increase the Doom fps, haha (I’m the guy that did that project). But I think there’s a lot more to it than that.

tl;dr — Rust would be great for a rewrite, but Go makes way more sense for a port. After the dust settles, I hope people focus on the outcomes, not the language choice.

I was very surprised to see that the TypeScript team didn’t choose Rust, not just because it seemed like an obvious technical choice but because the whole ecosystem is clearly converging on Rust _right now_ and has been for a while. I write Rust for my day job and I absolutely love Rust. TypeScript will always have such a special place in my heart but for years now, when I can use Rust.. I use Rust. But it makes a lot of sense to pick Go.

The key “reading between the lines” from the announcement is that they’re doing a port not a rewrite. That’s a very big difference on a complex project with 100-man-years poured into it.

Places where Go is a better fit than Rust when porting JavaScript:

- Go, like JavaScript and unlike Rust, is garbage collected. The TypeScript compiler relies on garbage collection in multiple places, and there are probably more that do but no one realizes it. It would be dangerous and very risky to attempt to unwind all of that. If it were a Rust rewrite, this problem goes away, but they’re not doing a rewrite.

- Rust is so stupidly hard. I repeat, I love Rust. Love it. But damn. Sometimes it feels like the Rust language actively makes decisions that demolish the DX of the 99.99% use-case if there’s a 0.001% use-case that would be slightly more correct. Go is such a dream compared to Rust in this respect. I know people that more-or-less learned Go in a weekend and are writing it professionally daily. I also know people that have been writing Rust every day professionally for years and say they still feel like noobs. It’s undeniable what a difference this makes on productivity for some teams.

Places where Go is just as good a fit as Rust:

- Go and Rust both have great parallelism/concurrency support. Go supports both shared memory (with explicit synchronization) and message-passing concurrency (via goroutines & channels). In JavaScript, multi-threading requires IPC with WebWorkers, making Go’s concurrency model a smoother fit for porting a JS-heavy codebase that assumes implicit shared state. Rust enforces strict ownership rules that disallow shared state, or at least make it a lot harder (by design, admittedly).

- Go and Rust both have great tooling. Sure, there are so many Rust JavaScript tools, but esbuild definitively proves that Go tooling can work. Heck, the TypeScript project itself uses esbuild today.

- Go and Rust are both memory safe.

- Go and Rust have lots of “zero (or near zero) cost abstractions” in their language surface. The current TypeScript compiler codebase makes great use of TypeScript enums for bit fiddling and packing boolean flags into a single int32. It sucks to deal with (especially with a Node debugger attached to the TypeScript typechecker). While Go structs are not literally zero cost, they’re going to be SO MUCH nicer than JavaScript objects for a use-case like this that’s so common in the current codebase. I think Rust sorta wins when it comes to plentiful abstractions, but Go has more than enough to make a huge impact (see the bit-flag sketch after this comment).

Places where Rust wins:

- the Rust type system. no contest. In fairness, Go doesn’t try to have a fancy type system. It makes up for a lot of the DX I complained about above. When you get an error that something won’t compile, but only when targeting Windows because Rust understands the difference in file permissions… wow. But clearly, what Go has is good enough.

- so many new tools (basically, all of them that are not also in JS) are being done in Rust now. The alignment on this would have been cool. But hey, maybe this will force the bindings to be high-quality which benefits lots of other languages too (Zig type emitter, anyone?!).

By this time next week when the shock wears off, I just really hope what people focus on is that our TypeScript type checking is about to get 10 times faster. That’s such a big deal. I can’t even put it into words. I hope the TypeScript team is ready to be bombarded by people trying to use this TODAY despite them saying it’s just a preview, because there are some companies that are absolutely desperate to improve their editor perf and un-bottleneck their CI. I hope people recognize what a big move this is by the TypeScript team to set the project up for success for the next dozen years. Fully ejecting from being a self-hosted language is a BIG and unprecedented move!
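
To make the bit-flag point above concrete, here is an illustrative sketch; the names are invented, but the shape mirrors how the compiler packs boolean facts into bit-flag enums such as ts.NodeFlags.

```ts
// Several boolean facts packed into one int32-style value via a bit-flag enum.
enum SymbolFlags {
  None       = 0,
  Exported   = 1 << 0,
  Optional   = 1 << 1,
  Readonly   = 1 << 2,
  Deprecated = 1 << 3,
}

let flags = SymbolFlags.Exported | SymbolFlags.Readonly;

// Queried and updated with bitwise ops. In JS this is a dynamically typed
// number the JIT has to keep guarding; in a Go port the same data can live in
// a plain uint32 struct field.
const isReadonly = (flags & SymbolFlags.Readonly) !== 0;
flags &= ~SymbolFlags.Deprecated; // clear a flag
console.log(isReadonly, flags.toString(2));
```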

  • tialaramex 4 days ago

    A tiny thing that's not relevant to this particular piece of work, but worth having in the background when thinking about Go: while Go, like Python, would typically be described as "memory safe", unlike in Java (or, more remarkably, Rust) it is very possible for naive programmers to cause undefined behaviour in this language without realising it.

    Specifically if you race any non-trivial Go object (say, a hash table, or a string) then that's immediately UB. Internally what's happening is that these objects have internal consistency rules which you can easily break this way and they're not protected against that because the trivial way to do so is expensive. Writing a Go data race isn't as trivial as writing a use-after-free in C++ but it's not actually difficult to do by mistake.

    In single threaded software this is no caveat at all, but most large software these days does have some threading involved.

    • commandersaki 3 days ago

      I'm confused what you ascribe to as undefined behaviour. What does that mean in the Go context? There is no mention of what UB is in Go at https://go.dev/ref/spec .

      • tialaramex 3 days ago

        With a race of a non-trivial object in Go you're violating assumptions of Go's language runtime. Physically impossible things can't happen because that's what physically impossible means, but stuff which is merely written down in a document like the one you linked is fair game for such problems.

        In terms of concrete examples, this might allow remote code execution, arbitrary reads or writes of memory that you otherwise don't have access to, stuff like that.

  • pansa2 4 days ago

    > I was very surprised to see that the TypeScript team didn’t choose Rust

    Typescript is a Microsoft project, right? I’m surprised they didn’t choose C#.

  • ChocolateGod 4 days ago

    imho Go is a far easier language to learn than Rust, so it lowers the barrier to entry for new contributors.

    • rickette 4 days ago

      Which is a massive pro for any open source project

      • frou_dh 4 days ago

        Some big projects have so many people trying to do PRs that it's actually a bit of a hassle to deal with them all. So I don't think maximising the number of contributors should necessarily be one of the top goals for projects that are already big or have guaranteed relevance.

    • libria 4 days ago

      Is learning a language even a thing anymore with $Internal_or_external_LLM_helper plugin available for every IDE? I haven't found syntax lookups to be that much a concern anymore and any boneheaded LLM suggestions are trivial to detect/fix.

      • ChocolateGod 4 days ago

        You still need to know the language it generates, otherwise you're generating gobbledygook.

  • tyilo 4 days ago

    > Go and Rust are both memory safe.

    Go doesn't seem to be memory safe, see https://www.reddit.com/r/rust/comments/wbejky/comment/ii7ak8... and https://go.dev/play/p/3PBAfWkSue3

    • tptacek 4 days ago

      "Memory safety" is a term of art meaning susceptibility to memory corruption attacks. They had to come up with some name for it; that's the name they came up with. This is a perennial tangent in conversations among technologists: give something a legible name, and people will try to axiomatically (re)define it.

      Rust is memory safe. Go is memory safe. Python is memory safe. Typescript is memory safe. C++ is not memory safe. C is not memory safe.

    • dimitropoulos 4 days ago

      I love Rust, but you can play exactly the same game with Rust: https://github.com/Speykious/cve-rs

      • tialaramex 4 days ago

        I mean, no? That's basically a known bug in Rust's compiler, specifically it's a soundness hole in type checking, and you'd basically never write it by accident - go read the guts of it for yourself if you think you might accidentally do this.

        At some point a next generation solver will make this not compile, and people will probably invent an even weirder edge case for that solver.

        Whereas the Go example is just how Go works, that's not a bug that's by design, don't expect Go to give you thread safety that's not what they promised.

        • dimitropoulos 4 days ago

          thank you for the clarification. you're right. I guess I was just trying to say that it's a spectrum (even if Rust is very very far along the way towards not having any holes). I can't seem to find it but there's some Tony Hoare or maybe Alan Turing quote or something like that about the only 100% correct computer program to ever exist was the first one.

    • vessenes 4 days ago

      This is true in that if you pass pointers through go routines, you do not have guarantees about what’s at the end of that pointer. However, this is “by design” in that generally you shouldn’t do that; the overhead the go memory model places on developers is to remember what’s passed as value and what’s passed as a pointer, and act accordingly. The rest it takes care of for you.

      The burden placed by rust on the developer is to keep track of all possible mutability and readability states and commit to them upfront during development. (If I may summarize, been a long time since I wrote any Rust). The rest it takes care of for you.

      The question of which a developer prefers at a certain skill level, and which a manager of developers at a certain skill level prefers, is going to vary.

    • jchw 4 days ago

      That is not a violation of memory safety, that's a violation of concurrency safety, which Go doesn't promise (and of course, Rust does.)

      • steveklabnik 4 days ago

        Segfaults are very much a memory safety issue. You are correct that concurrency is the cause here, but that doesn't mean it's not a memory safety issue.

        That said, most people still call Go memory safe even in spite of this being possible, because, well, https://go.dev/ref/mem

        > While programmers should write Go programs without data races, there are limitations to what a Go implementation can do in response to a data race. An implementation may always react to a data race by reporting the race and terminating the program. Otherwise, each read of a single-word-sized or sub-word-sized memory location must observe a value actually written to that location (perhaps by a concurrent executing goroutine) and not yet overwritten. These implementation constraints make Go more like Java or JavaScript, in that most races have a limited number of outcomes, and less like C and C++, where the meaning of any program with a race is entirely undefined, and the compiler may do anything at all.

        That last sentence is the most important part. Java in particular specifically defines that tears may happen in a similar fashion, see 17.6 and 17.7 of https://docs.oracle.com/javase/specs/jls/se8/html/jls-17.htm...

        I believe that most JVMs implement dynamic dispatch in a similar manner to C++, that is, classes are on the heap, and have a vtable pointer inside of them. Whereas Go's interfaces can work like Rust's trait objects, where they're a pair of (data pointer, vtable pointer). So the behavior we see here with Go is unlikely to be possible in Java, because the tear wouldn't corrupt the vtable pointer, because it's inside what's pointed at by the initial pointer, rather than being right after it in memory.

        These bugs do happen, but they have a more limited blast radius than ones in languages that are clearly unsafe, and so it feels wrong to lump Go in with them even though in some strict sense you may want to categorize it the other way.

        • jchw 4 days ago

          Sure, that's all true. It does limit Go's memory safety guarantees. However, I still believe that just because Java and other languages can give better guarantees around the blast radius of concurrency bugs does not mean that Go's definition of memory safety is invalid. I believe you can justifiably call Go memory-safe with unsafe concurrency. This may give people the wrong idea about where exactly Go fits in on the spectrum of "safe" coding (since, like you mentioned, some languages have unsafe concurrency that is still safer,) but it's not like it's that far off.

          On the other hand, though, in practice, I've wound up using Go in production quite a lot, and these bugs are excessively rare. And I don't mean concurrency bugs: Go's concurrency facilities kind of suck, so those are certainly not excessively rare, even if they're less common than I would have expected. However... not all Go concurrency bugs can possibly segfault. I'd argue most of them can't, at least not on most common platforms.

          So how severely you treat this lapse is going to come down to taste. I see the appeal of Rust's iron-clad guarantees around limiting the blast radius, but of course everything comes with limitations. I believe that any discussion about the limitations of guarantees like these should have some emphasis on the real impact. e.g. It's easy enough to see that the issues with memory management in C and C++ are serious based on the security track record of programs written in C and C++, I think we're still yet to fully understand how much of an impact Go's lack of safe concurrency will impact Go software in the long run.

          • steveklabnik 4 days ago

            > On the other hand, though, in practice, I've wound up using Go in production quite a lot, and these bugs are excessively rare.

            I both want to agree with this, but also point to things like https://www.uber.com/en-CA/blog/data-race-patterns-in-go/, which found a bunch of bugs. They don't really contextualize it in terms of other kinds of bugs, so it's really hard to say from just this how rare they actually are. One of the insidious parts of non-segfaulting data race bugs is that you may not notice them until you do, so they're easy to under-report. Hence the checker used in the above study.

            > not all Go concurrency bugs can possibly segfault. I'd argue most of them can't, at least not on most common platforms.

            For sure, absolutely. And I do think that's meaningful and important.

            > I think we're still yet to fully understand how much of an impact Go's lack of safe concurrency will impact Go software in the long run.

            Yep, and I do suspect it'll be closer to Java than to C.

            • jchw 4 days ago

              The Uber page does a pretty good job of summing it up. The only thing I'd add is that there has been a little bit of effort to reduce footguns since they've posted this article; as one example, the issue with accidentally capturing range for variables is now fixed in the language[1]. On top of having a built-in (runtime) race detector since 1.1 and runtime concurrent map access detection since 1.6, Go is also adding more tools to make testing concurrent code easier, which should also help ensure potentially racy code is at least tested[2] (ideally, with the race detector on.) Accidentally capturing named return values is now caught by a popular linting tool[3]. There is also gVisor's checklocks analyzer, which, with the help of annotations, can catch many misuses of mutexes and data protected by mutexes[4]. (This would be a lot nicer as a language feature, but oh well.)

              I don't know if I'd evangelize for adopting Go on the scale that Uber has: I think Go works best for shared-nothing architectures and gets gradually less compelling as you dig into more complex concurrency. That said, since Uber is an early adopter, there is a decent chance that what they have learned will help future organizations avoid repeating some of the same issues, via improvements to tooling and the language.

              [1]: https://go.dev/blog/loopvar-preview

              [2]: https://go.dev/blog/synctest

              [3]: https://github.com/mgechev/revive/blob/HEAD/RULES_DESCRIPTIO...

              [4]: https://pkg.go.dev/gvisor.dev/gvisor/tools/checklocks

              • steveklabnik 4 days ago

                Ah, that's great info, thank you :)

        • commandersaki 3 days ago

          > Segfaults are very much a memory safety issue.

          How can a segfault lead to attack or exploitation?

          Edit: Answering my own question (from https://go.dev/ref/mem):

          Reads of memory locations larger than a single machine word are encouraged but not required to meet the same semantics as word-sized memory locations, observing a single allowed write w. For performance reasons, implementations may instead treat larger operations as a set of individual machine-word-sized operations in an unspecified order. This means that races on multiword data structures can lead to inconsistent values not corresponding to a single write. When the values depend on the consistency of internal (pointer, length) or (pointer, type) pairs, as can be the case for interface values, maps, slices, and strings in most Go implementations, such races can in turn lead to arbitrary memory corruption.

          • jchw 3 days ago

            Not all segfaults necessarily point to exploitable bugs, but a segfault is usually very suspicious. On common architectures, you get a segmentation fault when there is a memory access violation. Which usually means you've either read from, written to, or tried to execute code at an address that is not readable, not writeable or not executable in your address space. That is suspicious because unless your program is intentionally doing that (which is relatively rare, and obviously in that case you would want to explicitly catch it with a signal handler) it suggests that some assumption your program is making about memory somewhere is incorrect. Like Go says, arbitrary memory corruption.

            Is that exploitable? It depends. It's easier to assume that it is than hope that it isn't.

            However, while it is a more serious category of issue, I have two reasons to suggest people don't over-index on it:

            - Concurrency bugs that can not lead to segmentation faults are by no means safe, they can still lead to exploits of arbitrary severity. Ones that can are more dangerous since they can violate Go's own safety guarantees, but so can the "unsafe" package, so you need to put it into some perspective.

            - Concurrency bugs that can are likely to be less common. In my experience, it is not extremely common to re-assign shared map or interface values in Go. If you are sharing a value of map, slice, string or interface and do plan on re-assigning it (thus causing the hazard in question) you can work around this problem trivially by adding a tiny bit of indirection, using an atomic pointer to the value instead, and re-assigning that pointer instead. Making a new value each time is no big deal since all of the fat pointers in question are still relatively small (just 2-3 machine words) though it incurs more allocations and pointer indirections so YMMV.

            And of course I recommend using all applicable linters, the checklocks analyzer from gVisor, and careful encapsulation of shared memory where possible. Even better is to avoid it entirely if you can.

            Of course, as much as I love Go, some types of program are going to need lots of hairy shared memory and mutations interweaving. And for that, Rust is the obvious best choice.

            • commandersaki 3 days ago

              Yeah, I was just thinking if an implementation has the propensity to abort or fail early with a segfault, that's better than running with memory corruption and far more difficult to exploit. It's not clear from the upthread example how soon it fails after corruption so there is potentially a narrow window where such a bug could be exploited if found in the wild with the apropos attack surface.

              • jchw 3 days ago

                Ah I see what you mean. To be fair, it is still true that not every bug that can lead to a segfault is exploitable, including this one potentially, but on the other hand, I think the point is that Go's memory safety guarantees always prevent segmentation faults: by the time you've hit a segmentation fault, you have definitely broken the type system and nothing is guaranteed anymore W.R.T. memory safety. So any bug that causes a segmentation fault is definitely immediately suspect. I think that's the point they were going for, at least.

  • troupo 4 days ago

    > The TypeScript compiler relies on garbage collection in multiple places

    What? And how? And how would that help in Go which has a completely different garbage collection mechanism?

    • tgv 4 days ago

      As in: there's no allocation/deallocation code. The code relies on garbage collection to function.

massive-fail 4 days ago

Typescript was the best thing that ever happened to the web! Thanks Daniel, Ryan and Anders and the rest of the team for making development great for over 10 years! This improvement is amazing!

  • nonethewiser 4 days ago

    >Typescript was the best thing that ever happened to the web!

    My development in regards to language:

    - Javascript sucks I love Python.

    - Python sucks I love Typescript.

dev1ycan 4 days ago

Typescript is a nice programming language, Javascript is not, I am glad

darthrupert 4 days ago

This kinda raises the question: should we port all backend TypeScript code to Go (or Rust) to get a similar runtime performance improvement? Is TypeScript generally this inefficient?

  • crabmusket 4 days ago

    You could profile it and find out.

    Another commenter pointed out that compilers have very different performance characteristics to games, and I'll include web servers in that too.

    tsc needs to start up fast and finish fast. There's not a ton of time to benefit from JIT.

    Your server on the other hand will run for how long between deployments?

  • airforce1 4 days ago

    If your backend is JS and it's too slow for you, then obviously porting it to a machine code binary will speed it up significantly. If you are happy with your backend performance, then does it matter?

accassar 4 days ago

What percentage of the new code was written by an LLM?

subarctic 4 days ago

One question I'm surprised isn't discussed here is how much AI code generation was used in this port. It seems like the perfect use case for it.

g0ld3nrati0 4 days ago

Will TS v7 only support erasable syntax? e.g. no enums?

  • progmetaldev 4 days ago

    TS v5.8 added the --erasableSyntaxOnly option (alongside Node.js 23.6 adding the ability to run your TS directly in Node), which errors on enums (as well as namespaces and other non-erasable syntax). I haven't found anything that mentions the deprecation of enums when searching; TS v6 is supposed to be as feature-compatible with v7 as possible, but since enums aren't a purely type-level, erasable feature I wouldn't rely on them.

    Right now you can use --erasableSyntaxOnly to find any enums in your code and start porting over to an alternative. This article lists alternatives if you're interested.

    https://exploringjs.com/tackling-ts/ch_enum-alternatives.htm...
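
    For reference, one common replacement is a plain `as const` object plus a derived union type, which survives --erasableSyntaxOnly because everything type-level erases cleanly (a hedged sketch; the names are illustrative):

```ts
// A const object stands in for the enum's values at runtime...
const LogLevel = {
  Debug: "debug",
  Info: "info",
  Error: "error",
} as const;

// ...and a union type derived from it stands in for the enum's type.
type LogLevel = (typeof LogLevel)[keyof typeof LogLevel];

function log(level: LogLevel, msg: string): void {
  console.log(`[${level}] ${msg}`);
}

log(LogLevel.Info, "starting up"); // ok
// log("warn", "nope");            // error: "warn" is not assignable to LogLevel
```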

garbagepatch 4 days ago

What do they mean by improving editor startup time? Does the editor (I assume vscode?) run the compiler as part of the startup? Why?

  • gavmor 4 days ago

    There are various ways to (de)couple the compiler to/from vscode, but it's definitely handy to have inline typechecking. Is this possible without running the compiler?

  • wiseowise 4 days ago

    How would they show syntax highlighting otherwise?

    • Cthulhu_ 3 days ago

      regexes, generally.

  • desumeku 3 days ago

    The language server, maybe?

dlahoda 3 days ago

imho main reason for go is big pool of engineers to hire who have read this book set https://compilerbook.com/

as i see it, the next planned feature is macros in TS (joke, just because the 3rd book is about macros).

neycoda 4 days ago

Once people figure out how much faster their apps will be, they'll add enough features to slow it down again.

aklein 4 days ago

Can’t wait for a better TSC Doom framerate.

kopirgan 4 days ago

Interesting Microsoft using Golang for this!

Ericson2314 4 days ago

It should be in OCaml. This is why I think OCaml should be compilable to the Go runtime/ABI.

gqgs 4 days ago

I guess this helps explain why Microsoft has their own fork of the Go language.

Traubenfuchs 4 days ago

I wonder how much that would have helped the guy who implemented Doom in TS types only.

deskr 4 days ago

I'm sold.

I'll give Typescript yet another go. I really like it and wish I could use it. It's just that any project I start, inevitably the sourcemap chain will go wrong and I lose the ability to run the debugger in any meaningful way.

sesm 4 days ago

Kinda shows that there is no practical ML-family language with good concurrency support.

  • nipah 4 days ago

    Don't think so; he stated one of the most important reasons was code compatibility, not specifically good concurrency support (though that was important, indeed). I think even the most functional languages would not be easily compatible with "functional TypeScript code" without heavy modifications. But either way, there is space for innovation in the field; I'm yet to see an ML-family language with concurrency that is as "hands on" as Go's. It would be extremely interesting to see that happen.

pizlonator 4 days ago

Misleading title. TypeScript isn't getting 10x faster. The compiler is 10x faster.

  • tobyhinloopen 4 days ago

    TS is nothing but a compiler

    • cjbgkagh 4 days ago

      It compiles to JS, one possible read would be that TS compiles to JS which runs 10x faster due to optimizations that can be made.

      • alexanderchr 4 days ago

        would be some very wishful reading!

thund 4 days ago

Too bad they didn’t choose Rust, would have loved contributing (not picking up Go, sry)

  • dimitropoulos 4 days ago

    did you contribute to the current TypeScript codebase? (not intended snarky, just curious)

    • thund 4 days ago

      a couple of commits merged yrs back, things I stumbled on that I used as an excuse to learn more about internals

lxe 4 days ago

Why not just work with the SWC folks and get the Rust implementation mainlined?

  • spankalee 4 days ago

    They want exact backwards compatibility with the JS implementation, so they're doing a line-by-line port.

reverseblade2 4 days ago

Just use fable and F# instead, your code transpiles to python and rust too

  • aaronmu 4 days ago

    For the small price of 10x slower tooling.

    I’ve been using F# full-time for 6 years now. And compiler/tooling gets painfully slow fast.

    Still wouldn’t trade it for anything else though.

falleng0d 4 days ago

I wonder how much faster DOOM will run on this

DrBenCarson 4 days ago

I get that the choice was well thought out, but it would have been nice to use the same language as most of the modern tools (Rust)

Do any other well-adopted tools in the ecosystem use Go?

  • homebrewer 4 days ago

    > other well-adopted tools in the ecosystem use Go

    esbuild is the most well-known/used project, probably beats all other native bundlers combined. I can't remember anything else off the top of my head.

    https://github.com/evanw/esbuild

zerr 4 days ago

Use browser and web for websites, not applications. For apps, create native downloadable desktop software, which also work offline.

  • kridsdale1 4 days ago

    I work at Google (the original and worst offender of this) and I advocate for native binaries all the time.

  • ggregoire 4 days ago

    I prefer having all my apps in the browser.

  • butshouldyou 4 days ago

    Funnily enough, that's exactly what they're doing in this announcement. They're rewriting `tsc` in Go and shipping native binaries, rather than shipping JS.

    • zerr 4 days ago

      Didn't quite get it: they compile TS to JS using the compiler now written in Go, right? But we as end users still get JS, not a native app.

emcell 4 days ago

this is huge! thank you!

algorithmsRcool 4 days ago

I am actually shocked that Anders chose Go over C# for this port.

_benton 4 days ago

> ctrl-f "rust" > 93 matches

sigh

singularity2001 4 days ago

So now tsc is a binary, browsers can efficiently bundle it and compile index.ts on the fly ... please?

arrty88 4 days ago

Now make it executable in the go runtime please:)

pseudopersonal 4 days ago

The post title is a bit misleading. It should say a 10x faster build time, or a 10x faster TypeScript compiler. tsc (the compiler) is 10x faster, but not the final TS program's runtime. Still an amazing feat! But Doom will not run faster.

"To meet those goals, we’ve begun work on a native port of the TypeScript compiler and tools. The native implementation will drastically improve editor startup, reduce most build times by 10x, and substantially reduce memory usage."

  • kaoD 4 days ago

    To clarify why it's actually not that ambiguous: TS is not (and does not have) a runtime at all. Even TS-first runtimes like Deno are (1) not TS but its own thing and most importantly (2) just JS engines with a frontend layer that treats TS as a first-class citizen (in Deno's case, V8).

    It's hard to tell if there will even be a runtime that somehow uses TS types to optimize even further (e.g. by proving that a function diverges) but to my knowledge they currently don't and I don't think there's any in the works (or if that's even possible while maintaining runtime soundness, considering you can "lie" to TS by casting to `unknown` and then back to any other type).
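
    A tiny sketch of that "lying" point: casts vanish at runtime, so the type information below could never be trusted by an optimizing runtime.

```ts
interface User {
  name: string;
}

// Round-tripping through `unknown` silences the checker entirely.
const notAUser = 42 as unknown as User; // compiles fine

// The compiler believes notAUser.name is a string; at runtime it's undefined,
// so this throws a TypeError.
console.log(notAUser.name.toUpperCase());
```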

    • dec0dedab0de 4 days ago

      “faster typescript” would also be a valid way to say the typescript compiler found a way to automatically write more performant javascript.

      Just like if you said faster C++ that could mean the compiler runs faster, or the resulting machine code runs faster.

      Just because the compile target is another human readable language doesn’t mean it ceases to be a typescript program.

      I didn’t think this particular example was very ambiguous because a general 10x speed up in the resulting JS would be insane, and I have used typescript enough to wish the compiler was faster. Though if we’re being pedantic, which I enjoy doing sometimes, I would say it is ambiguous.

      • jakelazaroff 4 days ago

        > “faster typescript” would also be a valid way to say the typescript compiler found a way to automatically write more performant javascript.

        That still wouldn't make sense, in the same way that it wouldn't make sense to say "Python type hints found a way to automatically write more performant Python". With few exceptions, the TypeScript compiler doesn't have any runtime impact at all — it simply removes the type annotations, leaving behind valid JavaScript that already existed as source code. In fact, avoiding runtime impact is an explicit design goal of TypeScript [1].

        They've even begun to chip away at the exceptions with the `erasableSyntaxOnly` flag [2], which disables features like enums that do emit code with runtime semantics.

        [1] https://github.com/microsoft/TypeScript/wiki/TypeScript-Desi...

        [2] https://www.typescriptlang.org/docs/handbook/release-notes/t...
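
        As a small illustration of that exception (a hedged sketch; the emitted shape shown is approximate), an enum is one of the few TypeScript constructs that leaves runtime code behind:

```ts
enum Direction {
  Up,
  Down,
}

// tsc emits roughly this IIFE for the enum above:
//
//   var Direction;
//   (function (Direction) {
//       Direction[Direction["Up"] = 0] = "Up";
//       Direction[Direction["Down"] = 1] = "Down";
//   })(Direction || (Direction = {}));
//
// which is why --erasableSyntaxOnly rejects enums, while plain annotations,
// interfaces, and type aliases erase to nothing.
console.log(Direction.Up); // 0 at runtime
```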

        • jessekv 4 days ago

          > Python type hints found a way to automatically write more performant Python

          I get your point, but... this is exactly the premise of mypyc ;)

      • pcthrowaway 4 days ago

        But typescript isn't a minifier or an optimizer. No part of typescript compiles it to anything that looks significantly different (besides enums).

        Sure, lots of build tools do this, but that's not Typescript.

        With very few exceptions, Typescript is written so that removing the Typescript-specific things makes it equivalent to the Javascript it transpiles to.

    • pseudopersonal 4 days ago

      Thanks for the clarification. For those of us who don't use TypeScript day to day, I feel that it is ambiguous. Without clicking the link, you wouldn't know if it's about a compiler or a runtime. What if they announced a Bun competitor?

      https://betterstack.com/community/guides/scaling-nodejs/node....

      • fastball 4 days ago

        Those are javascript runtimes, not TypeScript runtimes. The point stands.

        If you don't know enough about TypeScript to understand that TypeScript is not a runtime, I'm not sure why you would care about TypeScript being faster (in either case).

        • Izkata 4 days ago

          I thought the title was announcing someone created a Typescript runtime. It is misleading.

          Preact was "a faster React", for example.

        • zem 4 days ago

          if typescript code execution got that much faster it might be a reason for someone to look into the language even if they knew nothing about it.

          • timeflex 4 days ago

            There are plenty of other reasons to consider TypeScript, but again, what code execution are you referring to? The V8 JavaScript engine?

            • zem 4 days ago

              that's not the point I was making - gp was wondering why someone who didn't even know typescript compiled to javascript and ran atop a javascript engine would care that it had gotten 10x faster.

      • alabastervlog 4 days ago

        From the title, my initial assumption was someone wrote a compiler & runtime for typescript that doesn't target javascript, which was very exciting. And I do work with typescript.

      • internetter 4 days ago

        > Without clicking the link, you wouldn't know if it's about a compiler or a runtime

        I mean I think generally you’d want to click the link and read the article before commenting

        • depr 4 days ago

          It has become a sport here to criticize titles for not explaining any random thing the commenter doesn't know. Generally these things are either in the article or they are very easily findable with a single web search.

    • jilles 4 days ago

      If you have to explain why something is not ambiguous it is by definition ambiguous.

      • hoten 4 days ago

        Maybe they aren't the audience. I don't see how this is ambiguous to anyone that actually uses typescript

      • totallykvothe 4 days ago

        No. Ambiguous means that a statement has many possible meanings, not simply that something might be confusing.

        • refulgentis 4 days ago

          I'm a bit confused:

          - It's not ambiguous because they mean $X.

          - It is ambiguous because it has many possible meanings.

          - It is not ambiguous because it has many possible meanings

        • jzackpete 4 days ago

          that would imply the existence of an objective authority on the meaning of the statement, which is debatable

    • pzo 4 days ago

      There is Static Hermes from Meta that does AOT compilation to native, so I find it actually ambiguous. For a second I thought they did a compiler instead of a transpiler.

    • maxloh 4 days ago

      > It's hard to tell if there will even be a runtime that somehow uses TS types to optimize even further.

      Yeah, that exists. AssemblyScript has an AOT compiler that generates binaries from statically typed code.

      • crabmusket 4 days ago

        AssemblyScript is a very limited subset of the language though.

    • seanmcdirmid 4 days ago

      > It's hard to tell if there will even be a runtime that somehow uses TS types to optimize even further

      Typescript's type system is unsound so it probably will never be very useful for an optimizing compiler. That was never the point of TS however.

    • xanth 4 days ago

      Unfortunately many TS users have a surface level understanding of TS leading them to believe that TS is "real"

    • pas 4 days ago

      I use TS a lot and still assumed they are embarking on a native runtime/compiler whatever epic journey.

  • fabian2k 4 days ago

    I don't think this is misleading for anyone familiar with Typescript. Typescript itself has no impact on performance, and it is known that the compilation and type-checking speed is often a problem. So I immediately assumed that it was about exactly that.

    • hexomancer 4 days ago

      When I read the title I thought maybe they implemented a typescript to binary (instead of javascript) code compiler that speeds up the program by 10x, it would also have the added benefit of speeding up the compiler by 10x!

      I don't think that is too far fetched either since typescript already has most of the type information.

  • gr__or 4 days ago

    I can think of a DOOM that WILL run faster…

    https://youtu.be/0mCsluv5FXA

    • pseudopersonal 4 days ago

      Ah thanks! I didn't realize there was a Doom running on the TS type system. I stand corrected

      • nonethewiser 4 days ago

        That is a really funny coincidence. Of all the examples you could have picked...

      • chamomeal 4 days ago

        lol it just “released” recently. Like in the last couple of weeks. It shook the typescript world.

        It’s been a crazy couple of weeks for TS!!

  • ggus 4 days ago

    Agree. TypeScript is primarily a programming language. Did they make the language faster? No. Hence, the title is misleading.

    • hansifer 4 days ago

      That's debatable. I think most people that work with TS see it as a syntax extension for JS. Do you think JSX is a programming language?

    • mmcnl 4 days ago

      For anyone who uses TypeScript on a daily basis it's not ambiguous at all. Everyone who works with TS knows the runtime code is JavaScript code that is generated by the TypeScript compiler. And it's also pretty common knowledge that JavaScript is quite fast, but TS itself is not.

      • alpaca128 4 days ago

        And if this post was about a TS compiler that emitted x86 executables you would be wrong and find out that it is indeed ambiguous.

        • c-hendricks 4 days ago

          Why would a hypothetical "tsx86" project write an article titled "10x faster typescript" instead of "10x faster binaries with tsx86 2.0"

          If you have to invent things for something to be considered ambiguous, is it really ambiguous?

  • Etheryte 4 days ago

    I don't think it's misleading at all, because you can't run Typescript. Typescript is either compiled, transpiled or stripped down into another language and that's what gets run in the end.

    • alpaca128 4 days ago

      You can't run Java either as it's compiled to bytecode, yet when someone says "we made Java 10x faster" you wouldn't assume that just the compilation got faster, right? When people market Rust projects as blazingly fast nobody assumes it's about compilation, in part because a blazingly fast Rust compiler would be a miracle. Outside of this comment section people have always been using a programming language name for this because everyone knows what they mean.

      It's entirely possible that MS could have written a TypeScript compiler that emits native binaries and made the language 10x faster that way, so why not?

    • wendyshu 4 days ago

      Sure you can run Typescript. It's a programming language, someone could always write an interpreter for it.

      • Etheryte 4 days ago

        You could, but currently I'm not aware of any widely used options. Both Deno and Node turn it into Javascript first and then run that.

    • zamadatix 4 days ago

      You could make the same argument about anything that isn't native machine code, and even then some would debate whether it really runs directly enough on modern CPUs. In the end you still have the time it takes to build your project in a given language and the runtime performance of the end result. Those remain very useful distinctions regardless of how many layers of indirection sit between source code and execution.

      • Etheryte 4 days ago

        The difference here is that with Typescript, you're not really measuring Typescript's performance, but whatever your output language is. If you transpile to JavaScript, you're measuring that; if you output Wasm, you measure that, etc. The result isn't really dictated by Typescript.

        • zamadatix 4 days ago

          Transpiling isn't the only possibility to run TypeScript code, it's just the way to do it right now. A long time ago interpreting was the most common way to run JavaScript, now it's to JIT it, but you can also compile it straight to platform byte code or transpile it to C if you really want. That you could transpile JavaScript to C doesn't mean all ways of doing it would be equally performant though.

          Transpiling in itself also doesn't remove the possibility of producing more optimized code, especially if the source has more information about the types. The official TypeScript compiler doesn't really do any of that right now (e.g. it won't remove a branch that handles a variable being a number even when it has the type information to know the variable can never be one). Heck, it doesn't even natively support producing minified output to improve runtime parsing (you can always bolt that on yourself). In both cases it's not that transpilation prevents the optimization, it's just not done (or possibly not worthwhile if TS only ever targets JS runtimes, since JS JITs are extraordinarily good these days).
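
          A toy version of that branch example (tsc just strips the types and leaves the dead branch in the emitted JS):

              function describe(value: string) {
                // The checker knows `value` can never be a number here, yet the
                // emitted JavaScript still contains this branch verbatim.
                if (typeof value === "number") {
                  return "unreachable";
                }
                return value.toUpperCase();
              }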

      • wrs 4 days ago

        Not really in the case of TypeScript, because (with very small exceptions) when you “compile” TypeScript you are literally just removing the TypeScript, leaving plain JavaScript. It’s just type annotations; it doesn’t describe any runtime behavior at all.
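
        For example (assuming a modern target and no downleveling, this is roughly all that happens):

            // TypeScript source:
            function add(a: number, b: number): number {
              return a + b;
            }

            // What tsc emits (the annotations are simply gone):
            //
            //   function add(a, b) {
            //     return a + b;
            //   }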

        • zamadatix 4 days ago

          That depends on both the target and the TypeScript features you use. In many cases, even when downleveling isn't involved, transpilation can produce more than just stripped type info (particularly common with classes or things that need helper functions). There's also nothing stopping a TypeScript compiler from optimizing the transpiled (or directly compiled) code like any other compiler would, though the default TypeScript tooling doesn't really go after any of that (or even produce a minified version itself using the additional type hints).
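
          Concretely, something like an enum comes out as real runtime code rather than just having its types stripped (emit shown roughly; it varies by target/config):

              // TypeScript input:
              enum Direction { Up, Down }

              // Roughly what tsc emits:
              //
              //   var Direction;
              //   (function (Direction) {
              //       Direction[Direction["Up"] = 0] = "Up";
              //       Direction[Direction["Down"] = 1] = "Down";
              //   })(Direction || (Direction = {}));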

          • goatlover 4 days ago

            But the end result is still a JS runtime.

            • zamadatix 4 days ago

              Agreed, at least usually right now (it doesn't have to be forever, which would probably be the most realistic way for TypeScript to make meaningful runtime gains). That does not preclude the possibility of producing more optimal JavaScript code for the runtime to consume. I give a couple examples of that in the other comments.

    • surajrmal 4 days ago

      Tell that to the deno project.

      • hombre_fatal 4 days ago

        Deno compiles TS to JS before execution.

  • rs186 4 days ago

    This seems pedantic. As a TypeScript user who is aware of the conversations about build performance, the title is not ambiguous at all. I know exactly what they are talking about: build time.

    • lordofgibbons 4 days ago

      It was ambiguous to me. When someone says making a language X-times faster, it's natural to think about runtime performance, not compile times. I know TS runs on JS runtimes, but I assumed, based on the title, they created/modified a JS runtime to natively run TS fast.

  • darknavi 4 days ago

    That could be a little confusing but (generally today) TypeScript does not "run", JavaScript does.

    • 9rx 4 days ago

      > TypeScript does not "run"

      Except in the case of Doom, which can run on anything.

  • dimitropoulos 4 days ago

    look, not to argue with a stranger on hacker news, lol, but genuine calm question here: is this really a helpful nit? I know what you're getting at but the blogpost itself doesn't imply that JavaScript is 10x faster. I could complain, about your suggested change, that it's really `build and typecheck` time. It's a title. Sometimes they don't have _all_ the context. That's ok.

    • pseudopersonal 4 days ago

      It is for me. If someone says TypeScript is faster than X, they rarely mean the build time. I understand other people's points about TypeScript not being a runtime at all and only being a compiler, but when casually saying "TypeScript is faster than say ruby", people do not mean the compiler.

      • johnfn 4 days ago

        But no one actually says "TypeScript is faster than say ruby". They probably say "node is faster than say ruby" or maybe "bun is faster than say ruby". Perhaps they say "JavaScript is faster than say ruby", although even that is underspecified.

      • Tadpole9181 4 days ago

        Then read the article? I don't get it - Typescript, to anyone familiar, is not a language runtime. It does not optimize. It is a transpiler. If you don't even know this much about Typescript, you aren't the audience and lack prerequisite knowledge. Go read anything on the topic.

        If someone posted an article talking about the "handedness" of DNA or something, I wouldn't complain "oh, you confused me, I thought you were saying DNA has hands!"

      • dimitropoulos 4 days ago

        well, thanks for explaining. we might just simply disagree here. when I hear "TypeScript" I think of TypeScript, and when I hear "JavaScript" I think of JavaScript. I know what you mean re: casually speaking, but this is a blogpost from the TypeScript team. That context is there, too. I think if the same title were from an AWS release note, I'd totally see what you mean.

        • wrs 4 days ago

          Typescript is JavaScript at runtime. It’s not a separate language, just like Python with type annotations (TypePython?) is just Python at runtime. Both are just type annotations that get stripped away before anything tries to run the code. That’s the genius of the idea and why it’s so easily adopted.

          • Tadpole9181 4 days ago

            It is quite literally a separate language. Python's type hints are a part of the Python specification and all valid Python type hints will run in any compliant Python runtime. Typescript is not, in any way, valid JavaScript. The moment you add any type syntax, you can no longer run the code in Node or Browsers without enabling a special preprocess step.
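
            A minimal example of the point:

                // Valid TypeScript:
                const port: number = 8080;
                // A plain JS parser rejects the `: number` annotation with a SyntaxError.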

            • hansifer 4 days ago

              Do you think JSX is a separate language?

              • Tadpole9181 4 days ago

                Yes, JSX is a superset of JS and will not work in any tooling that is not explicitly JSX compatible. JS grammars will not parse it, it's not standard.

      • pnw_throwaway 4 days ago

        That’d be the autism kicking in, you’re gonna have to be 10% less miserable if you want anyone to put up with you.

    • legohead 4 days ago

      misleading titles are a no-no on HN.

      I agree with pseudopersonal in that the title should be changed. technically it's not misleading, but not everyone uses or is familiar with typescript.

    • k__ 4 days ago

      It could have been a new TSC that compiles to WASM.

    • jasonjmcghee 4 days ago

      Unfortunately many people only look at headlines, so titles do matter. People take them at face value.

      • dimitropoulos 4 days ago

        yes, and TypeScript is not JavaScript. Strictly speaking, everything that is specifically _TypeScript_ is well known to be separate from the JavaScript that actually runs.

  • dcre 4 days ago

    The explanations are of course correct, but I think you're right and there's not much downside to being clearer in the title. Maybe they decided against saying "compiler" because the performance boost also covers the language server.

  • hinkley 4 days ago

    Also it’s 4 times faster but runs multithreaded, which was tricky to do in JavaScript (but easier now).

  • hot_gril 4 days ago

    So I'm +inf as fast using JS

  • _ink_ 4 days ago

    Does Deno benefit from that?

  • TechSquidTV 4 days ago

    Since you don't execute TypeScript, and TS has nothing to do with the app that actually ends up running, I don't think it was misleading at all.

DrammBA 4 days ago

People seem very hurt that the creator of C# didn't pick C# for this very public project from a multi-trillion-dollar corp. I find it very refreshing, they defined logical requirements for what they wanted to do and chose Golang because it ticked more boxes than C#. This doesn't mean that C# sucks or that every C# project should switch to Golang, but there seems to be a very vocal minority affected by this logical decision.

  • bitmasher9 4 days ago

    My favorite benefit of Go over C# is that I don’t have to carry around a dotnet runtime to every service that touches my Typescript code.

    • nailer 4 days ago

      Can't the CLR tools just output native binaries now?

      • bitmasher9 4 days ago

        Can it? That’s awesome.

        Mostly these days I’m only aware of C# when it inconveniences me.

        • ryoukokonpaku 3 days ago

          It even produces smaller binaries than Go and the size scales much better as the codebase grows too. dotnet has come a long way since the olden days of .net framework.

          The reasons stated on GitHub don't seem very convincing, imo.

          - platform support

          NativeAOT supports all the relevant platforms; the only one missing is Android, which is marked experimental, but since they'd be treated as a "1st party" customer (both are MS projects), that could easily be expedited. Even WASM is supported in NativeAOT via the LLVM toolchain, and it's often reported to perform better than Go's WASM target, which doesn't use LLVM.

          - Usage of functions and structs

          C# supports this, and you even get better control over layout and performance in this regard. Functions can easily be ported as static functions on static classes. They could even have used F#, which is even closer to TypeScript, if they wanted a more direct port, since both languages compile to IL for NativeAOT.

          There must be more reasons why they didn't choose C#, likely non-technical ones. A missed opportunity, imo.

  • Matheus28 4 days ago

    I love their choice of Go because of how simple it is to generate a static executable with no dependencies (ie no dotnet runtime).

    • watermelon0 4 days ago

      With C# you can either bundle dotnet runtime with the executable, or use native AOT, which compiles to a binary without the runtime.

      However, in both native AOT and Go you actually have some parts of the runtime bundled in (e.g. garbage collector).

Starlord2048 4 days ago

[flagged]

  • noelwelsh 4 days ago

    I don't think this is accurate.

    Javascript is not slow because of GC or JIT (the JVM is about twice as fast in benchmarks; Go has a GC) but because JS as a language is not designed for performance. Despite all the work that V8 does it cannot perform enough analysis to recover desirable performance. The simplest example to explain is the lack of machine numbers (e.g. ints). JS doesn't have any representation for this so V8 does a lot of work to try to figure out when a number can be represented as an int, but it won't catch all cases.
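
    A rough illustration of the numbers point (the comments describe V8 internals, so treat them as approximations):

        // At the language level, all of these are just `number`, i.e. a 64-bit float.
        const a = 1;          // V8 may store this as a small tagged integer ("Smi")...
        const b = 1.5;        // ...but a float flowing through the same code path,
        const c = 2 ** 31;    // ...or a value outside the Smi range, forces a more
                              // general (and slower) representation.

        // The closest JS gets to real machine ints is typed arrays (and bigint):
        const xs = new Int32Array([1, 2, 3]);   // stored as actual 32-bit integers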

    As for "working solution over language politics" you are entirely pulling that out of thin air. It's not supported by the article in any way. There is discussion at https://github.com/microsoft/typescript-go/discussions/411 that mentions different points.

    • dleeftink 4 days ago

      I think JS can really zoom if you let it. Hamsters.js, GPU.js, taichi.js, ndarray, arquero, and S.js are all solid foundations for doing things really efficiently. Sure, it's not 'native' performance and it doesn't help on the compile side, but having their computational models in mind can really let you work around the language's limitations.

      • jsheard 4 days ago

        JS can be pretty fast if you let it, but the problem is the fastest path is extremely unergonomic. If you always take the fastest possible path you end up more or less writing asm.js by hand, or a worse version of C that doesn't even have proper structs.
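
        e.g. the "fast path" for something like a particle system ends up as structure-of-arrays over typed arrays, which is basically hand-rolled C without the structs (just a sketch):

            const N = 10_000;
            const xs = new Float64Array(N), ys = new Float64Array(N);
            const vxs = new Float64Array(N), vys = new Float64Array(N);

            function step(dt: number): void {
              for (let i = 0; i < N; i++) {
                xs[i] += vxs[i] * dt;   // no allocation, no property lookups,
                ys[i] += vys[i] * dt;   // and no readable `Particle` type anywhere
              }
            }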

        • dleeftink 4 days ago

          I find these userland libraries particularly effective, because you'll never leave JS land, conveniently abstracting over Workers, WebGL/WebGPU and WASM.

    • nine_k 4 days ago

      JS, interestingly, has a notion of integers, but only in the form of integer arrays, like Int16Array.

      I wonder if Typescript could introduce integer type(s) that a direct TS -> native code compiler (JIT or AOT) could use. Since TS becomes valid JS if all type annotations are removed, such numbers would just become normal JS numbers from the POV of a JS runtime which does not understand TS.
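
      You can sort of fake it today with a branded type plus asm.js-style `| 0` hints; a TS-aware native compiler could treat such a type as a real machine integer. Just a sketch, not an actual proposal:

          // Erases to a plain number for any JS runtime:
          type int32 = number & { readonly __brand: "int32" };
          const asInt32 = (n: number): int32 => (n | 0) as int32; // `| 0` truncates to 32 bits

          function sum(xs: Int32Array): int32 {
            let total = asInt32(0);
            for (const x of xs) total = asInt32(total + x);
            return total;
          }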

      • maxloh 4 days ago

        AssemblyScript (for WASM) and Huawei's ArkTS (for mobile apps) already exist in this space. However, they are too specific in their use cases and have never gained much public attention.

    • scottlawson 4 days ago

      you replied to an LLM generated comment. if you look at the posting history you can confirm it

  • nindalf 4 days ago

    > is this the beginning of a larger trend where JS/TS tooling migrates to native implementations

    No, it is not. It is a continuation of an existing trend.

    You may be interested in esbuild (https://github.com/evanw/esbuild), turborepo (https://github.com/vercel/turborepo), and biome-js (https://github.com/biomejs/biome), which are all native reimplementations of existing JS/TS projects. esbuild is written in Go, the others in Rust.

    > reveals something deeper: Microsoft prioritized shipping a working solution over language politics

    It's not that "deep". I don't see the politics either way; there are clearly successful projects using both Go and Rust. The only people who see "politics" are those who see people disagreeing, are unable to understand the substance of the disagreement, and decide "ah, it's just politics".

  • christianqchung 4 days ago

    This is not accusatory, but do you write your comments with AI? I checked your profile and someone else had the same question a few days ago. It's the persistent structure of "it isn't X – it's Y" with the em dash (– not -) that makes me wonder this. Nothing to add to your comment otherwise, sorry.

    • mannykoum 4 days ago

      Sorry for being pedantic, but they are using an en dash (–), not an em dash (—), which is a little strange because the latter is usually the one meant for adding information in secondary clauses, like commas and parentheses do. In addition, in most styles, you're not supposed to add spaces around it.

      So, I don't think the comment is AI-generated for this reason.

      • christianqchung 4 days ago

        You're right, oops. I agree with your reasoning (comment still gives off slop vibes but that's unprovable). But the parent has been flagged, so I'm not sure if that means admins/dang has agreed with me or if it was flagged for another reason.

        • johnisgood 4 days ago

          I think anyone can flag a comment, and if enough people flag a comment, it will become flagged.

    • hombre_fatal 4 days ago

      em-dash is shift-option-hyphen on macOS, so it's not a good heuristic—I use it myself.

      They're using en-dash which is even easier: option-hyphen.

      This is the wrong way to do AI detection. For one, an LLM would have used the right dash. And at the very least, go after someone who's actually wasting our time with belabored or overwrought text that doesn't engage with anything.

    • never_inline 4 days ago

      This is definitely AI; it's repetitive and reads like marketing copy / a sensational report.

      • sapiogram 4 days ago

        They're not "definitely" an AI. Sounds like a normal Go enthusiast to me.

        • nindalf 4 days ago

          A Go enthusiast who’s never heard of esbuild? Not impossible, but unlikely.

          • sapiogram 3 days ago

            So, a go enthusiast who does exclusively backend work? Seems likely enough, given that community's overall disdain for Javascript.

    • niederman 4 days ago

      You know, some humans use the correct dash too...

    • ilikegreen 4 days ago

      The em dash thing is not very conclusive. I have been writing with the em dash for many years, because it looks better and is very accessible on Mac OS (long press on dash key), while carrying a different tone than the simple dash. That, and I read some Tristram Shandy.

    • harrall 4 days ago

      Two hyphens (--) make an em dash (—) on Apple devices and in many word processors.

      In the pre-Unicode days, people would use two hyphens (--) to simulate em dashes.

    • nindalf 4 days ago

      That would explain a lot.

  • the_mitsuhiko 4 days ago

    > The Go choice over Rust/C# reveals something deeper: Microsoft prioritized shipping a working solution over language politics. Go's simplicity (compared to Rust) and deployment model (compared to C#) won the day.

    I'm not sure that this is particularly accurate for the Rust case. The goal of this project was to perform a 1:1 port from TypeScript to a faster language. The existing codebase assumes a garbage collector so Rust is not really a realistic option here. I would bet they picked GCed languages only.

    • pc86 4 days ago

      I can't imagine the devs at Microsoft have any issues with C#'s "deployment model."

      • Yoric 4 days ago

        I can imagine C# being annoying to integrate into some CIs, for instance. Go fits a sweet spot, with its fast compiler and usually limited number of external dependencies.

      • giancarlostoro 4 days ago

        I assume they picked Go because the binaries can be very stand alone.

  • register 4 days ago

    I don't see why Go's deployment model is superior to C#'s. You can easily build native binaries in C# as well nowadays.

    • gwbas1c 4 days ago

      I get the impression that, because Go has a lot of similar semantics to Typescript, it was easier to port to Go than other languages.

      From https://github.com/microsoft/typescript-go/discussions/411

      > Idiomatic Go strongly resembles the existing coding patterns of the TypeScript codebase, which makes this porting effort much more tractable.

      > We also have an unusually large amount of graph processing, specifically traversing trees in both upward and downward walks involving polymorphic nodes. Go does an excellent job of making this ergonomic, especially in the context of needing to resemble the JavaScript version of the code.

      Personally, I'm a big believer in choosing the right language for the job. C# is a great language, and often is "good enough" for many jobs. (I've done it for 20 years.) That doesn't mean it's always the best choice for the job. Likewise, sometimes picking a "familiar language" for a target audience is better than picking a personal favorite.

    • guappa 4 days ago

      Come on… 1 statically linked executable and it can cross build incredibly easily. There's no comparison even.

  • rvz 4 days ago

    > When a language team abandons self-hosting (TS in TS) for raw performance (Go), it signals we've hit fundamental limits in JS/TS for systems programming.

    I hope you really mean for "userspace tools / programs" which is what these dev-tools are, and not in the area of device drivers, since that is where "systems programming" is more relevant.

    I don't know why one would choose JS or TS for "systems programming", but I'm assuming you're talking about user-space programs.

    But really, anyone who knows the difference between a compiled language and a VM-based language knows the obvious, fundamental performance limitations of developer tools written in VM-based languages like JS or TS, and would avoid them since they were not designed for this use case.

    • pjmlp 4 days ago

      Back in my day, writing compilers was part of systems programming.

      • Yoric 4 days ago

        Yeah, the term has changed meaning several times. Early on, "systems programmer" meant basically what we call a "developer" now (as opposed to a programmer or a researcher).

        • sethaurus 4 days ago

          At that time, what would have been the distinction between "programmer" and "developer"?

          • Yoric 4 days ago

            If my memory serves, the "programmer" was essentially a mathematician, working on a single algorithm, while a "system developer" was building an entire system around it.

  • gherkinnn 4 days ago

    It *almost* sounds like you're telling the authors, one of whom posted this, what their motivations are.

  • sebzim4500 4 days ago

    >The Go choice over Rust/C# reveals something deeper: Microsoft prioritized shipping a working solution over language politics. Go's simplicity (compared to Rust) and deployment model (compared to C#) won the day. Even Anders Hejlsberg – father of C# – chose Go for pragmatic reasons!

    I don't follow. If they had picked Rust over Go why couldn't you also argue that they are prioritising shipping a working solution over language politics. It seems like a meaningless statement.

  • nine_k 4 days ago

    Go with parametric types is already a reasonably expressive language. It's much more expressive than C, in which a number of compilers have been written, at least initially; not everyone had the luxury of using OCaml or Haskell.

    There is already a growing number of native-code tools in the JS/TS ecosystem, like esbuild or swc.

    Maybe we should expect attempts at native AOT compilation for TS itself, to run on the server side, much like C# has an AOT native-code compiler.

  • ninetyninenine 4 days ago

    I wish there were a language like Rust, without the borrow checking and lifetimes, that was also popular and lived in the same space as Go. I think Go is actually the best language in this category, but it's only the best because there is nothing else. All in all, golang is not an elegant language.

    • noelwelsh 4 days ago

      O'Caml is similar, now that it has multicore. Scala is also similar, though the native code side (https://scala-native.org/en/stable/) is not nearly as well developed as the JVM side.

      • ninetyninenine 4 days ago

        The problem with O'Caml is it won't get popular because people are afraid of FP. But I would be totally down to use it.

    • sapiogram 4 days ago

      Rust loses a lot of its nice properties without borrow checking and lifetimes, though. For example, resources no longer get cleaned up automatically, and the compiler no longer protects you against data races. Which in turn makes the entire language memory unsafe.

      • SkiFire13 4 days ago

        I believe OP meant giving it a GC like Go's, while keeping Rust's other features: enums/match/generics/traits/etc.

        This should prevent most of the memory safety issues, though data races could still be tricky (e.g. Go is memory-unsafe under data races).

      • throw-the-towel 4 days ago

        OTOH it would still have Rust's sane type system and all the nice features it makes possible.

        • nine_k 4 days ago

          OCaml and Haskell already have that nice type system (and arguably an even nicer one). If OCaml's syntax bothers you, there is Reason [1], which is a different frontend to the same compiler suite.

          Also in this space is Gleam [2] which targets Erlang / OTP, if high concurrency and fault tolerance is your cup of tea.

          [1]: https://reasonml.github.io/

          [2]: https://gleam.run/

    • duped 4 days ago

      That language is Rust, though.

  • scotty79 4 days ago

    > Go's simplicity

    I think they went for Go mostly because of memory management, async and syntactic similarity to interpreted languages which makes total sense for a port.

  • madeofpalk 4 days ago

    > it signals we've hit fundamental limits in JS/TS for systems programming

    Really is this a surprise to anyone? I don't think anyone thinks JS is suitable for 'systems programming'.

    Javascript is the language we have for the browser; there's no value in debating its merits when it's the only option. Javascript on the server has only ever accrued benefits from being the same language as the browser's.

ragnese 4 days ago

But, I was told that programming language choice doesn't matter and that I can write slow/bad code in any language...

/s

  • rvz 4 days ago

    There you go.

    All the bootcamp cargo-culting crew have pumped lies such as "the language doesn't matter" or "learn coding in 1 week for a SWE job with JS / TS", and it has caused an increase in low-quality software, with several developers then asking how to improve things or bolt on "performance" optimizations after the fact.

    What we have just seen is the TS team admitting that a limit has been reached, and *almost always* the solution is either porting to a compiled language or relying on new computers with new processors, in accordance with Moore's Law, to get performance for free.

    Now the bootcampers are rediscovering why we need "static typing" and why a "compiled language" is more performant than a VM-based language.

    • ragnese 4 days ago

      Can you imagine the progress we could've made by now if people just tried to use the right tool for the job instead of trying to make the wrong tool good enough?

      All the time spent trying to optimize JITs for JavaScript engines, or alternative Python implementations (e.g., PyPy), and fruitless efforts like trying to get JVMs to start fast enough for use in cloud "lambda function" applications. Ugh...

      • wiseowise 4 days ago

        > and fruitless efforts like trying to get JVMs to start fast enough for use in cloud "lambda function" applications

        This is how we got Graal, why would you call it "fruitless effort"?

        • ragnese 3 days ago

          Okay, so "fruitless" wasn't the right word. If you try to build an actual house out of LEGO bricks, you can eventually succeed and therefore the endeavor was technically "fruitful." I think I should've described it as "wasteful" effort, or as an inefficient use of brilliant minds' time.

          For my specific example of JVMs on lambdas, I wasn't really thinking about GraalVM. I was more thinking of all the hacky, fiddly, things that people were doing to "warm up" their JVM-based lambdas. Like some of the stuff described in this article I just randomly grabbed from a web search: https://medium.com/@marcos.duarte242/keeping-your-aws-lambda...

          The reality is that JVM languages were just the wrong tool for the job of writing short-lived applications.

          Even though I wasn't really thinking about GraalVM, it might not be shocking that I don't really like it either- for the same kind of reason(s). Java was designed as a fairly dynamic language: you have runtime reflection, dynamic class loading (hot swapping), and various other (admittedly niche) features. So, Java code destined for GraalVM has to be written differently than Java code destined for a standard JVM runtime, which is an inverted way of saying that the nominal goal of GraalVM is technically impossible (you can't, generally, write a native compiler for the Java programming language). So, again, we're taking a language that was designed and optimized for specific runtime properties and we're forcing that square peg into the round hole of AOT compilation. You want native performance? Use a native language!

          It feels like someone trying to design a hammer to also be a really shitty screwdriver. Why not just use a hammer sometimes and a screwdriver other times?

  • dagw 4 days ago

    You can write slow code in any language, but you cannot write fast code in any language.

    • ragnese 4 days ago

      I didn't include every variant I've ever read, but there has been no shortage of people saying that the only thing that matters is your algorithms.

      Every time I've said that Python, JavaScript, and basically any other language where it's hard to avoid heap allocations, pointer chasing, and copious data copies are all slow, plenty of people have come out of the woodwork to inform me that it's all negligible.

      • dagw 4 days ago

        > no shortage of people saying that the only thing that matters is your algorithms.

        To be a little bit fair to those people, I have been in many situations where people go "my matlab/python code is too slow, I must re-write it in C", and I've been able to get an order of magnitude improvement by re-writing the code in the same language. Hell, I've ported terrible Fortran code to python/numpy and gotten a significant performance improvement. Of course, taking that well-written code and re-writing it in well-written C will probably give you a further order of magnitude. Fast code in a slow language can beat slow code in a fast language, but it will obviously never beat fast code in a fast language.

        • ragnese 4 days ago

          For sure. I agree with everything you say, and I've experienced the same thing 100 times, myself--including the specific scenario of speeding up someone's MATLAB code by multiple orders of magnitude by vectorizing the crap out of it. People seem to be almost drawn to quadratic-or-worse algorithms, even when I'd expect them to know better.

          I'm just a little bitter because of how many times I've been shushed in places like programming language subreddits and here when I've pointed out how inefficient some cool new library/framework/paradigm is. It feels like I'm either being gaslit or everyone else is in denial that things like excessive heap allocations really do still matter in 2025, and that JITs almost never help much with realistic workloads for a large percentage of applications.

  • nipah 4 days ago

    Many people say this, but it is obviously bullshit. Then again, most things people say all the time are bullshit, so I would not bother with it that much. It's not like people are saying "Programming languages don't matter, and here my claim is backed by a hundred statistics and heavily reviewed data and strong literature"; it's more like "Programming languages don't matter, well, at least I feel like it, the same way flowers smell like blue or something".

    • ragnese 3 days ago

      No doubt! I just like to point out the bullshit when I can. :)

tosh 4 days ago

tl;dr: TypeScript compiler (!) was implemented in TypeScript, new one is in Go

half of the perf gain is from moving to native code, other half is from concurrency

29athrowaway 4 days ago

tl;dr

10x faster compilation, not runtime performance

lawls 4 days ago

still javascript though

Delomomonl 4 days ago

I don't get it.

Why is typescript not already a standard natively supported by browsers?!

  • Cthulhu_ 4 days ago

    It kinda is already; strip the type information and you've got valid JS. Node.js supports running TypeScript nowadays, with the exception of some unerasable syntax that is being discouraged, and I'm sure it's only a matter of time before that bubbles up to V8 and other browser JS engines.
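
    Roughly, the split (as far as I understand Node's type stripping):

        // Erasable: delete the annotations and what's left is already valid JS.
        interface User { name: string }
        const greet = (u: User): string => `hi ${u.name}`;

        // Not erasable: these generate runtime code, so stripping alone isn't enough.
        enum Color { Red, Green }                        // emits a runtime object
        class Box { constructor(public v: number) {} }   // parameter property assigns at runtime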

atak1 4 days ago

Curious how this is going to affect Cursor - I'm assuming it'll just be a drop-in replacement and we can expect Cursor to get the same speed-up as VSCode.

zidad 4 days ago

And the lesson is; don't build anything that needs to be performant in TypeScript because it's so slow?

  • rvz 4 days ago

    Correct.

rvz 4 days ago

So in order to get "faster TypeScript" you have to port the existing "transpiler" to a compiled language that delivers said faster performance.

This is an admission that these JavaScript-based languages (including TypeScript) are just completely unsuitable for performance-sensitive situations, especially as the codebase scales.

As long as it is a compiled language with reasonable performance and proper memory management, Go is the unsurprising, but wise, choice to solve this problem.

But this choice definitively shows (and the TS team has admitted as much) how immature both JavaScript and TypeScript are in performance and scalability scenarios, and that they should be absolutely avoided for building systems that need them. Especially in the backend.

Just keep it in the frontend.

  • Cthulhu_ 4 days ago

    They're not getting "faster typescript", they're getting "a faster typescript transpiler / type checker"; subtle but important difference. The runtime of TS is Javascript engines, and most of "typescript transpilation" is pretty straightforward removal of type information.

    Anyway, JS is not immature in performance per se, but in this particular use case, a native language is faster. But they had to solve the problem first before they could decide what language was best for it.