This proposal reeks of "What color is your function?" https://journal.stuffwithstuff.com/2015/02/01/what-color-is-... . The distinction between sync functions and async functions keeps intruding into every feature. As we can see here, there are Symbol.dispose and Symbol.asyncDispose, and DisposableStack and AsyncDisposableStack.
I am so glad that Java decided to go down the path of virtual threads (JEP 444, JDK 21, Sep 2023). They decided to put some complexity into the JVM in order to spare application developers, library writers, and human debuggers from even more complexity.
I disagree. Hiding async makes reasoning about code harder and not easier. I want to know whether disposal is async and potentially affected by network outages, etc.
This is because normal execution and async functions form distinct cartesian closed categories in which the normal execution category is directly embeddable in the async one.
All functions have color (i.e. particular categories in which they can be expressed) but only some languages make it explicit. It's a language design choice, but categories are extremely powerful and applicable beyond just threading. Plus, Java and thread based approaches have to deal with synchronization which is ... Difficult.
(JavaScript restricts itself to monadic categories and more specifically to those expressible via call with continuation essentially)
In plain javascript it's not a problem. Types are duck-typed, so if you receive a result or a promise it doesn't matter. You can functionally work around the "color problem" using this dynamism.
It's only when you do something wacky like try to add a whole type system to a fully duck typed language that you run into problems with this. Or if you make the mistake of copying this async/await mechanism and then hamfistedly shove it into a compiled language.
Note that depending on use case, it may be preferable to use `DisposableStack` and `AsyncDisposableStack`, which are part of the `using` proposal and have built-in support for callback registration.
This is notably necessary for scope-bridging and conditional registration, as `using` is block-scoped, so:
if (condition) {
  using x = { [Symbol.dispose]: cleanup }
} // cleanup is called here
But because `using` is a variant of `const` which requires an initialisation value (which it registers immediately), this will fail:
using x; // SyntaxError: using missing initialiser
if (condition) {
  x = { [Symbol.dispose]: cleanup };
}
and so will this:
using x = { [Symbol.dispose]() {} };
if (condition) {
  // TypeError: assignment to using variable
  x = { [Symbol.dispose]: cleanup };
}
Instead, you'd write:
using x = new DisposableStack();
if (condition) {
  x.defer(cleanup);
}
Similarly if you want to acquire a resource in a block (conditionally or not) but want the cleanup to happen at the function level, you'd create a stack at the function toplevel then add your disposables or callbacks to it as you go.
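For illustration, a minimal sketch of that function-level pattern (openTempFile and the item shape are made up here):

function processAll(items) {
  using stack = new DisposableStack()       // lives until the function exits
  for (const item of items) {
    if (item.needsScratchFile) {
      const file = openTempFile(item)       // hypothetical acquisition inside a block
      stack.adopt(file, (f) => f.close())   // cleanup is deferred to function exit
    }
  }
  // ... work with everything acquired above ...
}                                           // the stack disposes its entries here, in reverse order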
This is a great idea, but:
> Integration of [Symbol.dispose] and [Symbol.asyncDispose] in web APIs like streams may happen in the future, so developers do not have to write the manual wrapper object.
So for the foreseeable future, you have a situation where some APIs and libraries support the feature, but others - the majority - don't.
So you can either write your code as a complicated mix of "using" directives and try/catch blocks - or you can just ignore the feature and use try/catch for everything, which will result in code that is far easier to understand.
I fear this feature has a high risk of getting a "not practically usable" reputation (because right now that's what it is) which will be difficult to undo even when the feature eventually has enough support to be usable.
Which would be a real shame, as it does solve a real problem and the design itself looks well thought out.
This is the situation in JavaScript for the last 15 years: new language features come first to compilers like Babel, then to the language spec, and then finally are adopted for stable APIs in conservative NPM packages and in the browser. The process from "it shows up as a compiler plugin" to "it's adopted by some browser API" can often be like 3-4 years; and even after it's available in "evergreen" browsers, you still need to have either polyfills or a few more years of waiting for it to be guaranteed available on older end-user devices.
Developers are quite used to writing small wrappers around web APIs anyway, since improvements to them come very slowly, and a small wrapper is often a lesser evil compared to polyfills; or the browser API is just annoying on the typical use path, so of course you want something a little different.
At least, I personally have never seen a new language feature that seems useful and thought to myself "wow, this is going to be hard to use".
In practice, a lot of stuff has already implemented this using forwards-compatible polyfills. Most of the backend NodeJS ecosystem, for example, already supports a lot of this, and you have been able to use this feature quite effectively for some time (with a transpiler to handle the syntax). In fact, I gave a couple of talks about this feature last year, and while researching for them, I was amazed by how many APIs in NodeJS itself or in common libraries already supported Symbol.dispose, even if the `using` syntax wasn't implemented anywhere.
I suspect it's going to be less common in frontend code, because frontend code normally has its own lifecycle/cleanup management systems, but I can imagine it still being useful in a few places. I'd also like to see a few more testing libraries implement these symbols. But I suspect, due to the prevalence of support in backend code, that will all come with time.
This is why TC39 needs to work on fundamental language features like protocols. In Rust, you can define a new trait and impl it for existing types. This still has flaws (the orphan rule prevents issues but causes bloat), but it would definitely be easier in a dynamic language with unique symbol capabilities to come up with something.
But this does leak the "trait conformance" globally; it's unsafe because we don't know if some other code wants their implementation of dispose injected into this class, if we're fighting, if some key iteration is going to get confused, etc...
How would a protocol work here? To say something like "oh in this file or scope, `ImageBitmap.prototype[Symbol.dispose]` should be value `x` - but it should be the usual `undefined` outside this scope"?
> So for the foreseeable future, you have a situation where some APIs and libraries support the feature, but others - the majority - don't.
Welcome to the web. This has pretty much been the case since JavaScript 1.1 created the situation where existing code used shims for things we wanted, and newer code didn't because it had become part of the language.
Yes, this will be familiar to people creating objects or classes that are intended to represent iterable collections. You do the same dynamic key syntax with a class declaration or object literal, but use `Symbol.iterator` as the well-known symbol for the method.
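For example, a small (made-up) Range class made iterable that way:

class Range {
  constructor(from, to) { this.from = from; this.to = to }
  *[Symbol.iterator]() {                    // dynamic key + generator method shorthand
    for (let i = this.from; i <= this.to; i++) yield i
  }
}

console.log([...new Range(1, 3)])           // [1, 2, 3]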
Other posters correctly described _what_ this is, but I didn't see anyone answer _why_.
Using a Symbol as the method name disambiguates this method from any previously-defined methods.
In other words, by using a Symbol for the method name (and not using a string), it's impossible to "name collide" on this new API, which would accidentally mark a class as disposable.
The premise is that you can always access an object's properties using indexing syntax as well as the normal dot syntax. So `object.foo` is the equivalent of `object["foo"]` or `object["f" + "o" + "o"]` (because the value inside the square brackets can be any expression). And if `object.foo` is a method, you can do `object.foo()` or `object["foo"]()` or whatever else as well.
Normally, the key expression will always be coerced to a string, so if you did `object[2]`, this would be the equivalent of object["2"]. But there is an exception for symbols, which are a kind of unique object that is always compared by reference. Symbols can be used as keys just as they are, so if you do something like
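const mySymbol = Symbol("mySymbol")
const object = { foo: "bar", [mySymbol]: "some value" }
console.log(object)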
You should see in the console that this object has a special key that is a symbol, as well as the normal "foo" attribute.
The last piece of the puzzle is that there are certain "well known symbols" that are mostly used for extending an object's behaviour, a bit like __dunder__ methods in Python. Symbol.dispose is one of these - it's a symbol that is globally accessible and always means the same thing, and can be used to define some new functionality without breaking backwards compatibility.
I hope that helps, feel free to ask more questions.
That's also possible, and it's common when using this pattern, but the specific syntax in the original question was, I believe, property access, and not part of an object literal. I didn't bring that up because I thought my comment was long enough and I wanted to explain that specific syntax. But yeah, you also have this syntax to set properties in object literals, and a similar syntax in classes.
Reminds me of C#.. IDisposable and IAsyncDisposable in C# help a lot to write good mechanisms for things that should actually be abstracted in a nice way (such as lock handling, queue mechanisms, temporary scopes for impersonation, etc).
That's because the author of the proposal is from Microsoft and has repeatedly shot down counter-suggestions that made the syntax look different from C#.
It's basically lifted from C#'s, the original proposal makes no secret of it and cites all of Python's context managers, Java's try with resources, C#'s using statements, and C#'s using declarations. And `using` being the keyword and `dispose` the hook method is a pretty big hint.
Resource management, especially when lexical scoping is a feature, is why some of us have been working on bringing structured concurrency to JS: https://bower.sh/why-structured-concurrency
That's the neat part, you don't. 90% of webdev is "upgrading" things in ways nobody asked for or appreciates, because it's just taken for granted that your codebase will grow mold or something if it isn't stirred often enough, and the other 10% of the work is fixing legitimate problems resulting from the first 90%. Of course, no probability is ever actually 1.0, so there will be rare occasions that you need to understand something that ChatGP-err, sorry my bad, I meant to say "something that you" wrote more than a year ago, in which case you suggest to your boss that this bug should be preserved until the next time there's a new hire, because it would make a great "jumping-on" point, and until then the users will still be able to get work done by using the recommended work-around, which is installing Windows XP Pirate Edition onto a VM and using IE6 to get into the legacy portal that somehow inexplicably still exists 20 years after the corporate merger that was supposed to make it obsolete.
For starters, your code is so full of serious syntax errors that in some places it's not even close to valid JavaScript. This is my best guess reconstruction:
(async (e) => {
  await doSomething()
  while (!done) {
    ({ done, value } = await reader.read())
  }
  promise
    .then(goodA, badA)
    .then(goodB, badB)
    .catch(err => console.log(err))
    .finally(() => {
      using stack = new DisposableStack()
      stack.defer(() => console.log('done.'))
    })
})()
But more importantly, this isn't even close to anything a reasonable JS dev would ever write.
1. It's not typical to mix await and while(!done), I can't imagine what library actually needs this. You usually use one or the other, and it's almost always just await:
await doSomething()
const value = await readFully(reader)
2. If you're already inside an Async IIFE, you don't need promise chains. Just await the stuff as needed, unless promise chains make the code shorter and cleaner, e.g.:
const json = await fetch(url).then(r => r.json())
3. Well designed JS libraries don't usually stack promise handlers like the {good,bad}{A,B} functions you implied. You usually just write code and have a top level exception handler:
using stack = new DisposableStack()
stack.defer(() => console.log('done.'))
try {
  const goodA = await promise
  const goodB = await goodA
  const goodC = await goodB
  return goodC
}
catch (e) {
  myLogErr(e)
}
// finally isn't needed, that's the whole point of DisposableStack
4. We don't usually need AIIFEs anymore, so the outer layer can just go away.
That is a matter of opinion. JavaScript allows you to use either convention at your preference. Personally, I feel my code looks much, much cleaner without semicolons. I also use whitespace liberally.
For the longest time, I used them just in case it would otherwise cause a bug. But TypeScript fully takes this into account and checks for all these scenarios.
note about that await block: "await" will await the _entire_ return, so if "promise" returns another promise ("goodA") which in turn also returns a promise ("goodB"), which in turn returns _another_ promise that ends up resolving as the non-promise value "goodC", then "await promise" just... gets you "goodC", directly.
The "example code" (if we can call it that) just used goodA and goodB because it tried to make things look crazy, by writing complete nonsense: none of that is necessary, we can just use a single, awaiting return:
Done. "await" waits until whatever it's working with is no longer a promise, automatically either resolving the entire chain, or if the chain throws, moving us over to the exception catching part of our code.
By programming in the language for a living and being familiar with the semantics of the language's keywords -- likely the same way anyone else understands their preferred language?
It's not that they're hard to understand, it's that they're much denser. From Factor's examples page:
> 2 3 + 4 * .
There's a lot more there to mentally parse than:
> (2 + 3) * 4
It's the same as when Rob Pike decries syntax highlighting. No, it's very useful to me. I can read much quicker with it.
It's the same principle behind how we use heuristics to much more quickly read words by sipmly looking at the begninnings and ends of each word, and most of the time don't even notice typos.
Well, I guess it might boil down to how one "thinks"?
Some people prefer:
2 3 + 4 *
Some other people prefer:
(* 4 (+ 2 3))
And some other people prefer:
(2 + 3) * 4
I personally find the last one easier to read or understand, but I have had my fair share of Common Lisp and Factor. :D
Syntax highlighting is useful for many people, including me. I can read much quicker with it, too. I know of some people who write Common Lisp without syntax highlighting though. :)
Also, sticking to one style and not mixing all the wildly different approaches to do the same thing.
JS, like HTML, has the special property that you effectively cannot make backwards-incompatible changes ever, because that scrappy webshop or router UI that was last updated in the 90s still has to work.
But this means that the language is more like an archeological site with different layers of ruins and a modern city built on top of it. Don't use all the features only because they are available.
But browsing the web with dev tools open, the amount of error messages on almost any site implies to me that it is more than one person who doesn't understand something.
It just seems like it's happening way more often in JavaScript, but I've seen absolute horrid and confusing Python as well.
The JavaScript syntax wasn't great to begin with, and as features are added to the language it sort of has to happen within the context of what's possible. It's also becoming a fairly large language, one without a standard library, so things just sort of hang out in a global namespace. It's honestly not too dissimilar to PHP, where the language just grew more and more functions.
As others point out, there's also some resemblance to C#. The problem is that parts of the more modern C# are also a confusing mess, unless you're a seasoned C# developer. The new syntax features aren't bad, and developers are obviously going to use them to implement all sorts of things, but if you're new to the language they feel like magical incantations. They are harder to read, harder to follow, and don't look like anything you know from other languages. Nor are they simple enough that you can just sort of accept them and just type the magical number of brackets and silly characters and accept that it somehow works. You frequently have no idea of what you just did or why something works.
I feel like JavaScript has reached the point where it's a living language, but because of its initial implementation and inherent limits, all these great features feel misplaced and bolted on, and provide an obstacle for new or less experienced developers. JavaScript has become an enterprise language, with all the negative consequences and baggage that entails. It's great that we're not stuck with half a language and we can do more modern stuff; it just means that we can't expect people to easily pick up the language anymore.
For me, personally, heavy use of the => operator (which happens to coincide with my main complaint about a lot of JavaScript code and anonymous functions). You can avoid it, but it is pretty standard.
Very specifically, I was also looking into JWT authentication in ASP.NET Core and found the whole thing really tricky to wrap my head around. That's more of a library, but I think many of the usage examples end up being a bunch of spaghetti code.
It all starts with being well-formatted and having a proper code editor instead of just a textarea on a webpage, so you'd get the many error notices for that code (because it sure as hell isn't valid JS =)
And of course, actually knowing the language you use every minute of the day because that's your job helps, too, so you know to rewrite that nonsense to something normal. Because mixing async/await and .then.catch is ridiculous, and that while loop should never be anywhere near a real code base unless you want to get yelled at for landing code that seems intentionally written to go into a spin loop under not-even-remotely unusual circumstances.
How so?
GP complained about JS lack of types. I pointed out that most JS actually benefits from types, given it's typically authored in TS. No moving goalposts, no "true Scotsman" arg.
Can someone explain why they didn't go with (anonymous) class destructors? Or something other than a Symbol as a special object key. Especially when there are two Symbols (a different one for asynchronous), which makes it a leaky abstraction, no?
Destructors require deterministic cleanup, which advanced GCs can't do (and really don't want to either from an efficiency perspective). Languages with advanced GCs have "finalizers" called during collection which are thus extremely unreliable (and full of subtle footguns), and are normally only used as a last resort solution for native resources (FFI wrappers).
Hence many either had or ended up growing means of lexical (scope-based) resource cleanup, whether:
- HoF-based (Smalltalk, Haskell, Ruby)
- dedicated scope / value hook (Python[1], C#, Java)
- callback registration (Go, Swift)
[1]: Python originally used destructors thanks to a refcounting GC, but the combination of alternate non-refcounted implementations, refcount cycles, and resources like locks not having guards (and not wanting to add those with no clear utility) led to the introduction of context managers
Destructors in other languages are typically used for when the object is garbage collected. That has a whole bunch of associated issues, which is why the pattern is often avoided these days.
The dispose methods on the other hand are called when the variable goes out of scope, which is much more predictable. You can rely on, for example, a file being closed or a lock released before your method returns.
JavaScript is already explicit about what is synchronous versus asynchronous everywhere else, and this is no exception. Your method needs to wait for disposing to complete, so if disposing is asynchronous, your method must be asynchronous as well. It does get a bit annoying though that you end up with a double await, as in `await using a = await b()` if you're not used to that syntax.
As for using symbols - that's the same as other functionality added over time, such as iterator. It gives a nice way for the support to be added in a backwards-compatible way. And it's mostly only library authors dealing with the symbols - a typical app developer never has to touch it directly.
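A sketch of how the async flavor reads in practice; connect() and the connection object here are invented for the example:

async function main() {
  await using conn = await connect()   // first await acquires; `await using` registers async disposal
  // ... use conn ...
}                                      // conn[Symbol.asyncDispose]() is awaited here

function connect() {
  return Promise.resolve({
    async [Symbol.asyncDispose]() {
      // e.g. flush buffers and close the underlying connection
    },
  })
}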
For garbage collected languages destructors cannot be called synchronously in most cases because the VM must make sure that the object is inaccessible first. So it will not work very deterministically, and also will expose the JS VM internals. For that JS already has WeakRef and FinalizationRegistry.
So that the read lock is lifted even if reader.read() throws an error.
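A sketch of the kind of wrapper that gives you (acquireReader is a made-up helper around the standard reader API):

function acquireReader(stream) {
  const reader = stream.getReader()
  return { reader, [Symbol.dispose]() { reader.releaseLock() } }
}

async function readFirstChunk(stream) {
  using r = acquireReader(stream)
  return await r.reader.read()   // even if read() throws, releaseLock() still runs
}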
Does this only hold for long running processes? In a browser environment or in a cli script that terminates when an error is thrown, would the lock be lifted when the process exits?
The spec just says that when a block "completes" its execution, however that happens (normal completion, an exception, a break/continue statement, etc.) the disposal must run. This is the same for "using" as it is for "try/finally".
When a process is forcibly terminated, the behavior is inherently outside the scope of the ECMAScript specification, because at that point the interpreter cannot take any further actions.
So what happens depends on what kind of object you're talking about. The example in the article is talking about a "stream" from the web platform streams spec. A stream, in this sense, is a JS object that only exists within a JS interpreter. If the JS interpreter goes away, then it's meaningless to ask whether the lock is locked or unlocked, because the lock no longer exists.
If you were talking about some kind of OS-allocated resource (e.g. allocated memory or file descriptors), then there is generally some kind of OS-provided cleanup when a process terminates, no matter how the termination happens, even if the process itself takes no action. But of course the details are platform-specific.
Browser web pages are quintessential long running programs! At least for Notion, a browser tab typically lives much longer (days to weeks) than our server processes (hours until next deploy). They're an event loop like a server often with multiple subprocesses, very much not a run-to-completion CLI tool. And errors do not terminate a web page.
The order of execution for unhandled errors is well-defined. The error unwinds up the call stack running catch and finally blocks, and if it gets back to the event loop, then it's often dispatched by the system to an "uncaught exception" (sync context) or "unhandled rejection" (async context) handler function. In NodeJS, the default error handler exits the process, but you can substitute your own behavior, which is common for long-running servers.
All that is to say, that yes, this does work since termination handler is called at the top of the stack, after the stack unwinds through the finally blocks.
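In Node that substitution looks roughly like this (logger stands in for whatever logging you already have):

process.on('uncaughtException', (err) => {
  logger.error('uncaught exception', err)    // registering a listener replaces the default "log and exit"
})
process.on('unhandledRejection', (reason) => {
  logger.error('unhandled rejection', reason)
})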
Yeah, great for that use-case - memory management; it's great to get the DisposableStack that allows "moving" out of the current scope too, that's handy.
I adopted it for quickjs-emscripten (my quickjs in wasm thingy for untrusted code in the browser) but found that differing implementations between the TypeScript compiler and Babel led to it not being reliably usable for my consumers. I ended up writing this code to try to work around the polyfill issues; my compiler will use Symbol.for('Symbol.dispose'), but other compilers may choose a different symbol...
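The workaround amounts to something like this sketch (not the exact quickjs-emscripten code):

// Prefer the native well-known symbol; fall back to the registry symbol that some
// transpiled output uses, so either convention is recognized.
const disposeSymbol = Symbol.dispose ?? Symbol.for('Symbol.dispose')

class Handle {
  [disposeSymbol]() {
    // release the underlying WASM-side resource
  }
}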
It's similar, but more inspired by C#'s "using declaration", an evolution of the using blocks, which are the C# version of try-with-resource: `using` declarations don't introduce their own block / scope.
Yeah, as someone else has pointed out it's C# inspired, this is a C# example:
public void AMethod() {
    // some code
    using var stream = thing.GetStream();
    // some other code
    var x = stream.ReadToEnd();
    // the stream will be automatically disposed when the enclosing scope (here, the method) ends
    // some more code not using the stream
} // any error means the stream will be disposed if it was initialized
You can still do the wrap if you need more fine grained control, or do anything else in the finally.
You can even nest them like this:
using var conn = new SqlConnection(connString);
using var cmd = new SqlCommand(query, conn);
conn.Open();
cmd.ExecuteNonQuery();
Edit: hadn't read the whole article, the javascript version is pretty good!
Not really. Both are ways to perform deterministic resource management, but RAII is a branch of deterministic resource management which most GC'd languages can not use as they don't have deterministic object lifetimes.
This is inspired by similar constructs in Java, C#, and Python (and in fact lifted from C# with some adaptation to JS's capabilities), and insofar as those were related to RAII, they were a step away from it, at least when it comes to Python: CPython historically did its resource management using destructors which would mostly be reliably and deterministically called on refcount falling to zero.
However,
1. this was an issue for non-refcounted alternative implementations of Python
2. this was an issue for the possibility of an eventual (if unlikely) move away from refcounting in CPython
3. destructors interact in awkward ways with reference cycles
4. even in a reference-counted language, destructors share common finaliser issues like object resurrection
Thus Python ended up introducing context managers as a means of deterministic resource management, and issuing guidance to avoid relying on refcounting and RAII style management.
The error was probably trying to write a generic `using`. In my experience languages which use higher order functions or macros for scope cleanup tend to build high-level utilities directly onto the lowest level features, it can be a bit repetitive but usually not too bad.
So in this case, rather than a generic `using` built on the even more generic `try/except` you should probably have built a `withFile` callback. It's a bit more repetitive, but because you know exactly what you're working with it's a lot less error prone, and you don't need to hope there's a ready made protocol.
It also provides the opportunity of upgrading the entire thing e.g. because `withFile` would be specialised for file interaction it would be able to wrap all file operations as promise-based methods instead of having to mix promises and legacy callbacks.
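A sketch of such a withFile helper on top of Node's fs/promises API:

import { open } from 'node:fs/promises'

async function withFile(path, flags, fn) {
  const handle = await open(path, flags)
  try {
    return await fn(handle)     // all file work happens inside the callback
  } finally {
    await handle.close()        // closed on success and on error alike
  }
}

// usage
const text = await withFile('data.txt', 'r', (f) => f.readFile({ encoding: 'utf8' }))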
I imagine there will eventually be lint rules for this somewhere and many of those using such a modern feature are likely to be using static analysis via eslint to help mitigate the risks here, but until it’s more established and understood and lint rules are fleshed out and widely adopted, there is risk here for sure.
What you describe is already the status quo today. This proposal is still a big improvement as it makes resource management less error prone when you're aware to use it and _standardizes the mechanism through the symbol_. This enables tooling to lint for the situations you're describing based on type information.
There is pretty strong precedent for this design over in .NET land - if it was awful or notably inferior to `defer` I'm sure the Chrome engineering team would have taken notice.
C# has the advantage of being a typed language, which allows compilers and IDEs to warn in the circumstances I mentioned. JavaScript isn't a typed language, which limits the potential for such warnings.
Anyway, I didn't say it was "inferior to defer", I said that it seemed more error-prone than RAII in languages like Rust and C++.
Edit: Sorry if I'm horribly wrong (I don't use C#) but the relevant code analysis rules look like CA2000 and CA2213.
> Anyway, I didn't say it was "inferior to defer", I said that it seemed more error-prone than RAII in languages like Rust and C++.
It is, but RAII really isn't an option if you have an advanced GC, as it is lifetime-based and requires deterministic destruction of individual objects, and much of the performance of an advanced GC comes from not doing that.
It's still difficult to get right in cases where you hold a disposable as a member. It's not obvious if disposables passed in also get disposed, and what's right depends on the situation (think of a string-based TextWriter being passed a byte-based Stream), and you will need to handle double disposes.
Further C# has destructors that get used as a last resort effort on native resources like file descriptors.
> Further C# has destructors that get used as a last resort effort on native resources like file descriptors.
True, I was going to mention that, but I saw that JS also has "finalization registries", which seem to provide finalizer support in JS, so I figured it wasn't a fundamental difference.
The problem they are trying to solve is that the programmer could forget to wrap an object creation with try. But their solution is just kicking the can down the road, because now the programmer could forget to write "using"!
I was thinking that a much better solution would be to simply add a no-op default implementation of dispose(), and call it whenever any object hits end-of-scope with refcount=1, and drop the "using" keyword entirely, since that way programmers couldn't forget to write "using". But then I remembered that JavaScript doesn't have refcounts, and we can't assume that function calls to which the object has been passed have not kept references to it, expecting it to still exist in its undisposed state later.
OTOH, if there really is no "nice" solution to detecting this kind of "escape", it means that, under the new system, writing "using" must be dangerous -- it can lead to dispose() being called when some function call stored a reference to the object somewhere, expecting it to still exist in its undisposed state later.
I feel it doesn't make sense to conflate resource management with garbage collection. The cleanup actions here are more like releasing a lock, deleting temporary files, or closing a connection. This doesn't lead to a lack of safety. These resources already need to deal with these uninitialised states. For example, consider a lock management object. You shouldn't assume you have the lock just because you have a reference to the manager resource. It's totally normal to have objects that require some sort of initialization.
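A sketch of that lock-manager idea, with all names invented for illustration:

class LockManager {
  #held = false
  acquire() {
    if (this.#held) throw new Error('lock already held')
    this.#held = true
    return { [Symbol.dispose]: () => { this.#held = false } }
  }
}

const lock = new LockManager()   // holding a reference does not mean you hold the lock
{
  using guard = lock.acquire()   // acquiring is a separate, explicit step...
}                                // ...and the lock is released when the block exits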
Another point there is that JS has always gone to great lengths not to expose the GC in any way. For example, you can’t enumerate a WeakSet, because that would cause behavior to be GC dependent. Calling dispose when an object is collected would very explicitly cause the GC to have semantic effects, and I think that goes strongly against the JS philosophy.
Yes, it and WeakRef are exceptions, but they are the only ones, designed to be deniable – if you delete globalThis.WeakRef; and delete globalThis.FinalizationRegistry; you go back to not exposing GC at all. WeakRef even has a special exception in the spec in that the .constructor property is optional, specifically so that handing a weak reference to some code does not necessarily enable it to create more weak references, so you can be also limited as to which objects' GC you can observe.
Though another problem is that the spec does not clearly specify when an object may be collected or allow the programmer to control GC in any way, which means relying on FinalizationRegistry may lead to leaks/failure to finalize unused resources (bad, but sometimes tolerable) or worse, use-after-free bugs (outright fatal) – see e.g. https://github.com/tc39/ecma262/issues/2650
Finalizers aren’t destructors. The finalizer doesn’t get access to the object being GC’d, for one. But even more crucially, the spec allows the engine to call your finalizer anywhere between long after the object has been GC’d, and never.
They’re basically a nice convenience for noncritical resource cleanup. You can’t rely on them.
I mean it’s an explicit violation of that philosophy as noted in the proposal:
> For this reason, the W3C TAG Design Principles recommend against creating APIs that expose garbage collection. It's best if WeakRef objects and FinalizationRegistry objects are used as a way to avoid excess memory usage, or as a backstop against certain bugs, rather than as a normal way to clean up external resources or observe what's allocated.
Fair, I wasn’t aware of that. But even so, there’s a big difference between a wonky feature intended for niche cases and documented almost entirely in terms of caveats, and “this is the new way to dispose of resources”.
And the point that this kind of thing is against the JS philosophy is pretty explicit:
Need to dig into this more, but I built OneJS [1] (kinda like React Native but for Unity), and at first glance this looks perfect for us(?). Seems to be super handy for Unity where you've got meshes, RenderTextures, ComputeBuffers, and NativeContainers allocations that all need proper disposal outside of JS. Forcing disposal at lexical scopes, we can probs keep memory more stable during long Editor sessions or when hot-reloading a lot.
Only when you have an object that implements [Symbol.dispose]. If you don't, then you need to create one (like the wrapper in the example from the article) or bang out some boilerplate to explicitly make and use a DisposableStack().
So with using there's a little collection of language features to learn and use, and (probably more importantly), either app devs and library devs have to get on the same page with this at the same time, or app devs have to add a handful of boilerplate at each call site for wrappers or DisposableStacks.
`using` is mostly more convenient, because it registers cleanup without needing extra calls, unlike `defer`.
And of course you can trivially bridge callbacks, either by wrapping a function in a disposable literal or by using the DisposableStack/AsyncDisposableStack utility types which the proposal also adds.
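Both bridging styles in one sketch, where subscription stands in for anything exposing a cleanup callback:

function track(subscription) {
  // ad-hoc disposable literal
  using _ = { [Symbol.dispose]: () => subscription.unsubscribe() }

  // or, when collecting several callbacks, via a stack
  using stack = new DisposableStack()
  stack.defer(() => subscription.unsubscribe())

  // ... do work while subscribed ...
}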
First, I had to refresh my memory on the new object definition shorthand: In short, you can use a variable or expression to define a key name by using brackets, like: let key = "foo"; { [key]: "bar"}, and secondly you don't have to write { "baz" : function(p) { ... } }, you can instead write { baz(p) {...} }. OK, got it.
So, if I'm looking at the above example correctly, they're implementing what is essentially an Interface-based definition of a new "resource" object. (If it walks like a duck, and quacks...)
To make a "resource", you'll tack on a new magical method to your POJO, identified not with a standard name (like Object.constructor() or Object.__proto__), but with a name that is a result of whatever "Symbol.dispose" evaluates to. Thus the above definition of { [Symbol.dispose]() {...} }, which apparently the "using" keyword will call when the object goes out of scope.
Do I understand that all correctly?
I'd think the proper JavaScript way to do this would be to either make a new object specific modifier keyword like the way getters and setters work, or to create a new global object named "Resource" which has the needed method prototypes that can be overwritten.
Using Symbol is just weird. Disposing a resource has nothing to do with Symbol's core purpose of creating unique identifiers. Plus it looks fugly and is definitely confusing.
Is there another example of an arbitrary method name being called by a keyword? It's not a function parameter like async/await uses to return a Promise, it's just a random method tacked on to an Object using a Symbol to define the name of it. Weird!
JS has used "well-known symbols"[1] to allow extending / overriding the functionality of objects for about 10 years. For example, an object is an iterable if it has a `[Symbol.iterator]` property. Symbols are valid object keys; they are not just string aliases.
Symbols are a very safe way to introduce new "protocols" in either the language standard or for application code. This is because Symbol can never conflict with existing class definitions. If we use a string name for the method, then existing code semantics change.
Here are the well-known symbols that my NodeJS 22 offers when I `Symbol.<tab>`:
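(A representative list; the exact completion output depends on the Node version:)
Symbol.asyncDispose, Symbol.asyncIterator, Symbol.dispose, Symbol.hasInstance, Symbol.isConcatSpreadable, Symbol.iterator, Symbol.match, Symbol.matchAll, Symbol.replace, Symbol.search, Symbol.species, Symbol.split, Symbol.toPrimitive, Symbol.toStringTag, Symbol.unscopables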
> identified not with a standard name (like Object.constructor() or Object.__proto__)
__proto__ was a terrible mistake. Google “prototype pollution”; there are too many examples to link. In a duck-typed language where the main mechanism for data deserialization is JSON.parse(), you can’t trust the value of any plain string key.
> To make a "resource", you'll tack on a new magical method to your POJO, identified not with a standard name [...] nothing to do with Symbol's core purpose of creating unique identifiers.
The core purpose and original reason why Symbol was introduced in JS is the ability to create non-conflicting but well known / standard names, because the language had originally reserved no namespace for such and thus there was no way to know any name would be available (and not already monkey patched onto existing types, including native types).
> Is there another example of an arbitrary method name being called by a keyword? It's not a function parameter like async/await uses to return a Promise, it's just a random method tacked on to an Object using a Symbol to define the name of it. Weird!
`Symbol.iterator` called by `for...of` is literally the original use case for symbols.
> I'd think the proper JavaScript way to do this would be to either make a new object specific modifier keyword like the way getters and setters work, or to create a new global object named "Resource" which has the needed method prototypes that can be overwritten.
I hadn't considered how blessed I was to have __enter__ / __exit__ in Python for context managers and the more general Protocol concept that can be used for anything because lordy that definition looks ugly as sin. Even Perl looks on in horror of the sins JS has committed.
The more this stuff gets introduced the more I’m convinced to use Rust everywhere. I’m not saying this is the Rust way - it’s actually reminiscent of Python/C#. But, Rust does it better.
If we keep going down these roads, Rust actually becomes the simpler language as it was designed with all of these goals instead of shoe-horning them back in.
I think JavaScript should remain simple. If we really need this functionality we can bring in defer. But as a 1:1 with what is in golang. This in between of python and golang is too much for what JavaScript is supposed to be.
I definitely think that the web needs a second language with types, resource management and all sorts of structural guard rails. But continuing to hack into JavaScript is not it.
It depends on what language the JavaScript engine is implemented in. For V8 that's C++, yeah. I would agree with Google being a supervillain nowadays, but others use C++ too, so I would think it's unfair to call it a supervillain language...
Resource scoping is important feature. Context managers (in python) are literally bread and butter for everyday tasks.
It's awkward not because of Symbol, but because it introduces new syntax tied to existing implicit scopes. It's kinda fragile, based on Go experience. Explicit scoping is way more predictable.
When I discovered this feature, I looked everywhere in my codebases for a place to use it. Turns out most JS APIs, whether Web or Node.js, just don't need it, since they auto-close your resources for you. The few times I did call .close() used callbacks and would have been less clean/intuitive/correct to rewrite as scoped. I haven't yet been able to clean up even one line of code with this feature :(
I'm just a hobbyist and have some scriptlets I've written to "improve" things on various websites. As part of that I needed an uninstall/undo feature - so with my "bush league" code I would do `window.mything = {something}`, and based upon previous dabblings with the likes of python/c#/go, I presumed I would be able to do `delete window.mything` and it would auto-magically call a function I would have written to do the work. So the new `[Symbol.dispose]()` feature/function would have done what I was looking for - but really, it's not a big deal, because all that I actually had to do was write an interface spec'ing a `{}.remove()` method, and call it where needed.
(This paragraph is getting off topic, but still...) Below is my exact interface that I have in a .d.ts file. The reason for that file is because I like typed languages (i.e. TypeScript), but I don't want to install stuff like Node.js for such simple things. So I realised vscode can/will check js files as ts on-the-go, so in a few spots (like this) I needed to "type" something - and then I found some posts about svelte source code using JSDoc to type their code-base instead of TypeScript. So that's basically what I've done here...
So chances are that in the places you could use this feature, you've probably already got an "interface" for closing things when done (even if you haven't defined the interface in a type system).
If you’re using the withResource() pattern, you’re already effectively doing this, so yeah. If you’re using try/finally, it might be worth taking a second look.
This looks most similar to golang’s defer. It runs cleanup code when leaving the current scope.
It differs from try/finally, c# “using,” and Java try-with-resources in that it doesn’t require the to-be-disposed object to be declared at the start of the scope (although doing so arguably makes code easier to understand).
It differs from some sort of destructor in that the dispose call is tied to scope, not object lifecycle. Objects may outlive the scope if there are other references, and so these are different.
If you like golang’s defer then you might like this.
> This looks most similar to golang’s defer. It runs cleanup code when leaving the current scope.
It's nothing like go's defer: Go's defer is function-scoped and registers a callback, using is block-scoped and registers an object with a well defined protocol.
This can also be seen from the proposal itself (https://github.com/tc39/proposal-explicit-resource-managemen...) which cites C#'s using statement and declaration, Java's try-with-resources, and Python's context managers as prior art, but only mentions Go's defer as something you can emulate via DisposableStack and AsyncDisposableStack (types which are specifically inspired by Python's ExitStack).
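The emulation reads roughly like this (openFile is a made-up acquisition):

function copyFile(src, dst) {
  using stack = new DisposableStack()
  const input = openFile(src)
  stack.defer(() => input.close())
  const output = openFile(dst)
  stack.defer(() => output.close())
  // ... copy bytes ...
}                                  // deferred callbacks run here, last-in first-out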
Beyond happy Java made that decision as well.
Oh my word.
That only just occurred to me. To everybody else who finds it completely obvious, "well done", but it seemed worthy of mention nonetheless.
Just like golang. Nice.
Dynamic languages don't need protocols. If you want to make an existing object "conform to AsyncDisposable", you:
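// e.g. attach the well-known symbol straight onto the instance
// (close() stands in for whatever cleanup the object already exposes):
someObject[Symbol.asyncDispose] = async () => { await someObject.close() }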
Or if you want to ensure all ImageBitmap conform to Disposable:
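// patch the prototype once, for every ImageBitmap (close() is ImageBitmap's own API):
ImageBitmap.prototype[Symbol.dispose] = function () { this.close() }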
Isn't disconnecting a resize observer a poor example of this feature?
I couldn't come up with a reasonable one off the top of my head, but it's for illustration - please swap in a better web api in your mind
(edit: changed to ImageBitmap)
Isn't this typically solved with polyfills in the JavaScript world?
I regularly add Symbol based features to JS libraries I'm using (named methods are riskier, of course)
I have not blown my foot off yet with this approach but, uh, no warranty, express or implied. It's been working excellently for me so far though.
Much nicer than just adding your symbol method to the original class. :p
I wrote an article with more examples https://waspdev.com/articles/2025-05-17/js-destructors-or-ex... . It's actually a simplified version of this https://github.com/tc39/proposal-explicit-resource-managemen... .
I understand that JavaScript needs to maintain backwards compatibility, but the syntax
[Symbol.dispose]()
is very weird in my eyes. This looks like an array which is called like a function and the array contains a method-handle.
What is this syntax called? I would like to learn more about it.
Dynamic keys (square brackets on the left hand side in an object literal) have been around for nearly 10 years, if memory serves.
https://www.samanthaming.com/tidbits/37-dynamic-property-nam...
Also in the example is method shorthand:
https://www.samanthaming.com/tidbits/5-concise-method-syntax...
Since symbols cannot be referred to by strings, you can combine the two.
Basically, there isn't any new syntax here.
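Put together, that's all the snippet in the question is:

const resource = {
  [Symbol.dispose]() {            // computed (symbol) key + concise method syntax
    console.log('cleaned up')
  },
}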
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...
Someone more knowledgeable will join in soon, but I'm pretty sure it was derived from:
so it makes a lot of sense.
Dynamic property access perhaps?
It’s not that, it’s a dynamic key in an object literal.
This syntax has been used for quite some time. JavaScript iterators use the same syntax, and they've been part of JavaScript for almost a decade now.
Object property access, I guess. Like:
myObj["myProperty"]
If it's a function then it could be invoked,
myObj["myProperty"]()
If the key was a symbol,
myObj[theSymbol]()
pretty sure they were asking about the dynamic property name, { [thing]: ... }
https://github.com/tc39/proposal-explicit-resource-managemen...
https://github.com/tc39/proposal-explicit-resource-managemen...
https://github.com/tc39/proposal-explicit-resource-managemen...
https://github.com/tc39/proposal-explicit-resource-managemen...
That looks like a lot of very reasonable responses to me.
Library that leverages structured concurrency: https://frontside.com/effection
If you want to play with this, Bun 1.0.23+ seems to already have support: https://github.com/oven-sh/bun/discussions/4325
I don't understand how somebody can code like this and reason/control anything about the program execution :)
async (() => (e) { try { await doSomething(); while (!done) { ({ done, value } = await reader.read()); } promise .then(goodA, badA) .then(goodB, badB) .catch((err) => { console.error(err); } catch { } finally { using stack = new DisposableStack(); stack.defer(() => console.log("done.")); } });
Oh, we must upgrade, because of vulnerabilities. All the vulnerabilities found in 90% of this moot code.
Ok, point taken.
You wrote out loud what I've been thinking quietly.
From the lack of punctuation I think you can also rap it out loud.
Your paragraph is as complicated as the code we create over time. Is this your point? Then I take it.
For starters, your code is so full of serious syntax errors that in some places it's not even close to valid JavaScript. This is my best guess reconstruction:
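Something along these lines, guessing at the missing bits (doSomething, reader, promise and the good*/bad* handlers are assumed to exist elsewhere):

  (async () => {
    try {
      await doSomething()
      let done, value
      while (!done) {
        ({ done, value } = await reader.read())
      }
      promise
        .then(goodA, badA)
        .then(goodB, badB)
        .catch((err) => { console.error(err) })
    } catch {
      // swallow
    } finally {
      using stack = new DisposableStack()
      stack.defer(() => console.log("done."))
    }
  })()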
But more importantly, this isn't even close to anything a reasonable JS dev would ever write.

1. It's not typical to mix await and while(!done); I can't imagine what library actually needs this. You usually use one or the other, and it's almost always just await.
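For a stream, for instance, you'd usually just iterate it; a rough sketch, assuming the stream is async-iterable and handle() is whatever you do with each chunk:

  for await (const value of stream) {
    handle(value) // no manual done flag needed
  }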
2. If you're already inside an async IIFE, you don't need promise chains. Just await the stuff as needed, unless promise chains make the code shorter and cleaner.

3. Well-designed JS libraries don't usually stack promise handlers like the {good,bad}{A,B} functions you implied. You usually just write code and have a top-level exception handler.

4. We don't usually need async IIFEs anymore, so the outer layer can just go away.

By removing the semicolons you made it worse.
That is a matter of opinion. JavaScript allows you to use either convention at your preference. Personally, I feel my code looks much, much cleaner without semicolons. I also use whitespace liberally.
For the longest time, I used them just in case it would otherwise cause a bug. But TypeScript fully takes this into account and checks for all these scenarios.
note about that await block: "await" will await the _entire_ chain, so if "promise" resolves to another promise ("goodA"), which in turn also resolves to a promise ("goodB"), which in turn resolves to _another_ promise that ends up resolving to the non-promise value "goodC", then "await promise" just... gets you "goodC", directly.
The "example code" (if we can call it that) just used goodA and goodB because it tried to make things look crazy, by writing complete nonsense: none of that is necessary, we can just use a single, awaiting return:
Done. "await" waits until whatever it's working with is no longer a promise, automatically either resolving the entire chain, or if the chain throws, moving us over to the exception catching part of our code.By programming in the language for a living and being familiar with the semantics of the language's keywords -- likely the same way anyone else understands their preferred language?
People write Haskell for a living, after all.
And Lisp, and Forth... :D
Lisp I can understand to some degree... but writing Forth for a living? I know about 20-30 languages, but that one is Greek to me.
Stack-based or concatenative languages can be difficult to understand, but as with anything, you may / could get used to it. :)
I prefer Factor[1] over Forth, however. Maybe you'll like it!
[1] https://factorcode.org/
It's not that they're hard to understand, it's that they're much denser. From Factor's examples page:
> 2 3 + 4 * .
There's a lot more there to mentally parse than:
> (2 + 3) * 4
It's the same as when Rob Pike decries syntax highlighting. No, it's very useful to me. I can read much quicker with it.
It's the same principle behind how we use heuristics to much more quickly read words by sipmly looking at the begninnings and ends of each word, and most of the time don't even notice typos.
Well, I guess it might boil down to how one "thinks"?
Some people prefer:
Some other people prefer: And some other people prefer: I personally find the last one easier to read or understand, but I have had my fair share of Common Lisp and Factor. :D

Syntax highlighting is useful for many people, including me. I can read much quicker with it, too. I know of some people who write Common Lisp without syntax highlighting though. :)
To embed code on HN, add 2 or more spaces at the beginning of each line:
(indentation preserved as posted by OP – I don't understand how somebody can code like this either :-)

Indenting helps.
Also, sticking to one style and not mixing all the wildly different approaches to do the same thing.
JS, like HTML, has the special property that you effectively cannot make backwards-incompatible changes ever, because that scrappy webshop or router UI that was last updated in the 90s still has to work.
But this means that the language is more like an archeological site with different layers of ruins and a modern city built on top of it. Don't use all the features only because they are available.
There used to be a great book for this, "JavaScript The Good Parts". Is there a well-respected equivalent for JavaScript in 2025?
Also, practice. Programming is hard, but just because one person doesn't understand something doesn't mean it's impossible or a bad idea.
But browsing the web with dev tools open, the amount of error messages on almost any site suggests to me that it's more than one person who doesn't understand something.
It's also great for job security if very few people would be able to work on it.
you can write horrid code intentionally in any programming language
It just seems like it's happening way more often in JavaScript, but I've seen absolutely horrid and confusing Python as well.
The JavaScript syntax wasn't great to begin with, and as features are added to the language it sort of has to happen within the context of what's possible. It's also becoming a fairly large language, one without a standard library, so things just sort of hang out in a global namespace. It's honestly not too dissimilar to PHP, where the language just grew more and more functions.
As others point out there's also some resemblance to C#. The problem is that parts of the more modern C# are also a confusing mess, unless you're a seasoned C# developer. The new syntax features aren't bad, and developers are obviously going to use them to implement all sorts of things, but if you're new to the language they feel like magical incantations. They are harder to read, harder to follow, and don't look like anything you know from other languages. Nor are they simple enough that you can just sort of accept them, type the magical number of brackets and silly characters, and accept that it somehow works. You frequently have no idea of what you just did or why something works.
I feel like JavaScript has reached the point where it's a living language, but because of its initial implementation and inherent limits, all these great features feel misplaced and bolted on, and provide an obstacle for new or less experienced developers. JavaScript has become an enterprise language, with all the negative consequences and baggage that entails. It's great that we're not stuck with half a language and we can do more modern stuff; it just means that we can't expect people to easily pick up the language anymore.
> parts of the more modern C# is also a confusing mess
Do you have any examples?
For me, personally, heavy use of the => operator (which happens to coincide with my main complaint about a lot of JavaScript code and anonymous functions). You can avoid it, but it is pretty standard.
Very specifically, I was also looking into JWT authentication in ASP.NET Core and found the whole thing really tricky to wrap my head around. That's more of a library concern, but I think many of the usage examples end up being a bunch of spaghetti code.
No worse than C++, frankly.
LLMs will do only this and you'll love it.
Maybe not love it, but you really won't have a choice.
It all starts with being well-formatted and having a proper code editor instead of just a textarea on a webpage, so you'd get the many error notices for that code (because it sure as hell isn't valid JS =)
And of course, actually knowing the language you use every minute of the day because that's your job helps, too, so you know to rewrite that nonsense to something normal. Because mixing async/await and .then.catch is ridiculous, and that while loop should never be anywhere near a real code base unless you want to get yelled at for landing code that seems intentionally written to go into a spin loop under not-even-remotely unusual circumstances.
I mean we're talking about a language community where someone created a package to tell if a variable is a number... and it gets used *a lot*.
That JavaScript has progressed so much in some ways and yet is still missing basic things like parameter types is crazy to me.
The overwhelming majority of serious work in JS is authored in TypeScript.
That is a ”no true Scotsman” argument.
How so? GP complained about JS lack of types. I pointed out that most JS actually benefits from types, given it's typically authored in TS. No moving goalposts, no "true Scotsman" arg.
That just sounds like an even stronger argument to add types to the language.
Could be.
Someone needs to start creating leftPad and isOdd type troll packages in Rust just so we can ridicule the hubris.
Can someone explain why they didn't go with (anonymous) class destructors? Or something other than a Symbol as a special object key? Especially when there are two Symbols (a different one for asynchronous), which makes it a leaky abstraction, no?
Destructors require deterministic cleanup, which advanced GCs can't do (and really don't want to either from an efficiency perspective). Languages with advanced GCs have "finalizers" called during collection which are thus extremely unreliable (and full of subtle footguns), and are normally only used as a last resort solution for native resources (FFI wrappers).
Hence many either had, or ended up growing, means of lexical (scope-based) resource cleanup, whether:
- HoF-based (Smalltalk, Haskell, Ruby)
- a dedicated scope / value hook (Python[1], C#, Java)
- callback registration (Go, Swift)
[1]: Python originally used destructors thanks to a refcounting GC, but the combination of alternate non-refcounted implementations, refcount cycles, and resources like locks not having guards (and not wanting to add those with no clear utility) led to the introduction of context managers
What does "HoF" stand for?
Higher-order function: a function taking another function (or block) as an argument.
E.g. in Ruby you can lock/unlock a mutex manually, but the normal way to do it is to pass a block to `Mutex#synchronize`, which is essentially just a lock / yield / ensure-unlock wrapper called with the critical section as the block.

Destructors in other languages are typically used for when the object is garbage collected. That has a whole bunch of associated issues, which is why the pattern is often avoided these days.
The dispose methods, on the other hand, are called when the variable goes out of scope, which is much more predictable. You can rely on, for example, a file being closed or a lock released before your method returns.
JavaScript is already explicit about what is synchronous versus asynchronous everywhere else, and this is no exception. Your method needs to wait for disposing to complete, so if disposing is asynchronous, your method must be asynchronous as well. It does get a bit annoying though that you end up with a double await, as in `await using a = await b()` if you're not used to that syntax.
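A rough sketch of where the two awaits come from (connect/close are made-up stand-ins for some async resource):

  async function openConnection() {
    const conn = await connect(); // async acquisition
    return {
      conn,
      async [Symbol.asyncDispose]() {
        await conn.close(); // async cleanup
      },
    };
  }

  async function main() {
    await using c = await openConnection(); // right-hand await: acquire; `await using`: disposal will be awaited
    // ... use c.conn ...
  } // conn.close() is awaited here, when the scope exits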
As for using symbols - that's the same as other functionality added over time, such as iterators. It gives a nice way for the support to be added in a backwards-compatible way. And it's mostly only library authors dealing with the symbols - a typical app developer never has to touch them directly.
For garbage-collected languages, destructors cannot be called synchronously in most cases because the VM must make sure that the object is inaccessible first. So it would not work very deterministically, and it would also expose the JS VM internals. For that JS already has WeakRef and FinalizationRegistry.
https://waspdev.com/articles/2025-04-09/features-that-every-... https://waspdev.com/articles/2025-04-09/features-that-every-...
But even Mozilla doesn't recommend using them, because they're quite unpredictable and might work differently in different engines.
Because this approach also works for stuff that is not a class instance.
There is no such thing as an anonymous property in JavaScript. Your question doesn't make sense. What else could this possibly be?
Because javascript is uncivilized.
Their first example is about having to have a try/finally block in a function like this:
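Roughly (readAll is my placeholder name; the shape is a getReader()/releaseLock() pair around a read loop):

  async function readAll(stream) {
    const reader = stream.getReader();
    try {
      while (true) {
        const { done, value } = await reader.read();
        if (done) break;
        // ... do something with value ...
      }
    } finally {
      reader.releaseLock(); // runs whether read() succeeded or threw
    }
  }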
So that the read lock is lifted even if reader.read() throws an error.

Does this only hold for long-running processes? In a browser environment, or in a CLI script that terminates when an error is thrown, would the lock be lifted when the process exits?
The spec just says that when a block "completes" its execution, however that happens (normal completion, an exception, a break/continue statement, etc.) the disposal must run. This is the same for "using" as it is for "try/finally".
When a process is forcibly terminated, the behavior is inherently outside the scope of the ECMAScript specification, because at that point the interpreter cannot take any further actions.
So what happens depends on what kind of object you're talking about. The example in the article is talking about a "stream" from the web platform streams spec. A stream, in this sense, is a JS object that only exists within a JS interpreter. If the JS interpreter goes away, then it's meaningless to ask whether the lock is locked or unlocked, because the lock no longer exists.
If you were talking about some kind of OS-allocated resource (e.g. allocated memory or file descriptors), then there is generally some kind of OS-provided cleanup when a process terminates, no matter how the termination happens, even if the process itself takes no action. But of course the details are platform-specific.
Browser web pages are quintessential long running programs! At least for Notion, a browser tab typically lives much longer (days to weeks) than our server processes (hours until next deploy). They're an event loop like a server often with multiple subprocesses, very much not a run-to-completion CLI tool. And errors do not terminate a web page.
The order of execution for unhandled errors is well-defined. The error unwinds up the call stack, running catch and finally blocks, and if it gets back to the event loop, it's often dispatched by the system to an "uncaught exception" (sync context) or "unhandled rejection" (async context) handler function. In Node.js, the default error handler exits the process, but you can substitute your own behavior, which is common for long-running servers.
All that is to say: yes, this does work, since the termination handler is called at the top of the stack, after the stack unwinds through the finally blocks.
I just wrote a blog post (https://morsecodist.io/blog/typescript-resource-management) about this feature. I love it and I feel it still hasn't caught on in the ecosystem.
This is very useful for resource management of WASM types which might have different memory backing.
Yeah, great for that use-case - memory management; it's great to get the DisposableStack that allows "moving" out of the current scope too, that's handy.
I adopted it for quickjs-emscripten (my quickjs-in-wasm thingy for untrusted code in the browser) but found that differing implementations between the TypeScript compiler and Babel led to it not being reliably usable for my consumers. I ended up writing this code to try to work around the polyfill issues; my compiler will use Symbol.for('Symbol.dispose'), but other compilers may choose a different symbol...
https://github.com/justjake/quickjs-emscripten/blob/aa48b619...
Is it the same as try with resources in Java?
It's similar, but more inspired by C#'s "using declaration", an evolution of the using blocks, which are the C# version of try-with-resource: `using` declarations don't introduce their own block / scope.
The original proposal references all of Python's context managers, Java's try-with-resources, and C#'s using statement and declaration: https://github.com/tc39/proposal-explicit-resource-managemen...
Yeah, as someone else has pointed out it's C# inspired, this is a C# example:
You can still do the wrap if you need more fine-grained control, or do anything else in the finally. You can even nest them like this:
Edit: hadn't read the whole article, the JavaScript version is pretty good!

Unsure if this is inspired from C++ RAII. RAII looks very elegant.
`[Symbol.dispose]()` threw me off
> Unsure if this is inspired from C++ RAII.
Not really. Both are ways to perform deterministic resource management, but RAII is a branch of deterministic resource management which most GC'd languages can not use as they don't have deterministic object lifetimes.
This is inspired by similar constructs in Java, C#, and Python (and in fact lifted from C# with some adaptation to JS's capabilities), and insofar as those were related to RAII, they were a step away from it, at least when it comes to Python: CPython historically did its resource management using destructors which would mostly be reliably and deterministically called on refcount falling to zero.
However,
1. this was an issue for non-refcounted alternative implementations of Python
2. this was an issue for the possibility of an eventual (if unlikely) move away from refcounting in CPython
3. destructors interact in awkward ways with reference cycles
4. even in a reference-counted language, destructors share common finaliser issues like object resurrection
Thus Python ended up introducing context managers as a means of deterministic resource management, and issuing guidance to avoid relying on refcounting and RAII style management.
I tried to write a `using` utility for JS a few years ago: https://gist.github.com/davidmurdoch/dc37781b0200a2892577363...
It's not very ergonomic so I never tried to use it anywhere.
The mistake was probably trying to write a generic `using`. In my experience, languages which use higher-order functions or macros for scope cleanup tend to build high-level utilities directly onto the lowest-level features; it can be a bit repetitive, but usually not too bad.
So in this case, rather than a generic `using` built on the even more generic `try/finally`, you should probably have built a `withFile` callback. It's a bit more repetitive, but because you know exactly what you're working with, it's a lot less error-prone, and you don't need to hope there's a ready-made protocol.
It also provides the opportunity of upgrading the entire thing e.g. because `withFile` would be specialised for file interaction it would be able to wrap all file operations as promise-based methods instead of having to mix promises and legacy callbacks.
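A minimal sketch of what that could look like on Node (withFile here is a hypothetical helper, not an existing API):

  import { open } from "node:fs/promises";

  // acquire the file, hand it to the callback, always close it afterwards
  async function withFile(path, flags, fn) {
    const handle = await open(path, flags);
    try {
      return await fn(handle);
    } finally {
      await handle.close();
    }
  }

  // usage
  const text = await withFile("notes.txt", "r", (f) => f.readFile("utf8"));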
Sure, the with* pattern is fine. I was just playing with the idea of C#'s disposable pattern in JS.
This seems error-prone, for at least two reasons:
* If you accidentally use `let` or `const` instead of `using`, everything will work but silently leak resources (see the sketch after this list).
* Objects that contain resources need to manually define `dispose` and call it on their children. Forgetting to do so will lead to resource leaks.
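For the first point, a minimal sketch (acquire() stands in for any disposable resource):

  function acquire() {
    return { [Symbol.dispose]() { console.log("released"); } };
  }

  {
    using a = acquire();
  } // "released" is logged when the block exits

  {
    const b = acquire();
  } // no error, no warning - dispose is simply never called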
It looks like defer dressed up to resemble RAII.
Here’s some relevant discussion about some of the footguns:
https://github.com/typescript-eslint/typescript-eslint/issue...
https://github.com/tc39/proposal-explicit-resource-managemen...
I imagine there will eventually be lint rules for this somewhere and many of those using such a modern feature are likely to be using static analysis via eslint to help mitigate the risks here, but until it’s more established and understood and lint rules are fleshed out and widely adopted, there is risk here for sure.
https://github.com/typescript-eslint/typescript-eslint/issue...
To me it seems a bit like popular lint libraries just going ahead and adding the rule would make a big difference here
What you describe is already the status quo today. This proposal is still a big improvement as it makes resource management less error prone when you're aware to use it and _standardizes the mechanism through the symbol_. This enables tooling to lint for the situations you're describing based on type information.
There is pretty strong precedent for this design over in .NET land - if it was awful or notably inferior to `defer` I'm sure the Chrome engineering team would have taken notice.
C# has the advantage of being a typed language, which allows compilers and IDEs to warn in the circumstances I mentioned. JavaScript isn't a typed language, which limits the potential for such warnings.
Anyway, I didn't say it was "inferior to defer", I said that it seemed more error-prone than RAII in languages like Rust and C++.
Edit: Sorry if I'm horribly wrong (I don't use C#) but the relevant code analysis rules look like CA2000 and CA2213.
> Anyway, I didn't say it was "inferior to defer", I said that it seemed more error-prone than RAII in languages like Rust and C++.
It is, but RAII really isn't an option if you have an advanced GC, as it is lifetime-based and requires deterministic destruction of individual objects, and much of the performance of an advanced GC comes from not doing that.
Most GC'd languages have some sort of finalizers (so does JavaScript: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...) but those are unreliable and often have subtle footguns when used for cleanup.
It’s still difficult to get right in cases where you hold a disposable as a member. It’s not obvious whether disposables passed in also get disposed, what’s right depends on the situation (think a string-based TextWriter getting passed a byte-based Stream), and you will need to handle double disposes.
Further C# has destructors that get used as a last resort effort on native resources like file descriptors.
> Further C# has destructors that get used as a last resort effort on native resources like file descriptors.
True, I was going to mention that, but I saw that JS also has "finalization registries", which seem to provide finalizer support in JS, so I figured it wasn't a fundamental difference.
Exactly this. No idea why you were downvoted.
The problem they are trying to solve is that the programmer could forget to wrap an object creation with try. But their solution is just kicking the can down the road, because now the programmer could forget to write "using"!
I was thinking that a much better solution would be to simply add a no-op default implementation of dispose(), and call it whenever any object hits end-of-scope with refcount=1, and drop the "using" keyword entirely, since that way programmers couldn't forget to write "using". But then I remembered that JavaScript doesn't have refcounts, and we can't assume that function calls to which the object has been passed have not kept references to it, expecting it to still exist in its undisposed state later.
OTOH, if there really is no "nice" solution to detecting this kind of "escape", it means that, under the new system, writing "using" must be dangerous -- it can lead to dispose() being called when some function call stored a reference to the object somewhere, expecting it to still exist in its undisposed state later.
I feel it doesn't make sense to conflate resource management with garbage collection. The cleanup actions here are more like releasing a lock, deleting temporary files, or closing a connection. This doesn't lead to a lack of safety. These resources already need to deal with these uninitialised states. For example, consider a lock management object. You shouldn't assume you have the lock just because you have a reference to the manager resource. It's totally normal to have objects that require some sort of initialization.
Another point there is that JS has always gone to great lengths not to expose the GC in any way. For example, you can’t enumerate a WeakSet, because that would cause behavior to be GC dependent. Calling dispose when an object is collected would very explicitly cause the GC to have semantic effects, and I think that goes strongly against the JS philosophy.
FinalizationRegistry was added, like, 5 years ago.
Yes, it and WeakRef are exceptions, but they are the only ones, designed to be deniable – if you delete globalThis.WeakRef; and delete globalThis.FinalizationRegistry; you go back to not exposing GC at all. WeakRef even has a special exception in the spec in that the .constructor property is optional, specifically so that handing a weak reference to some code does not necessarily enable it to create more weak references, so you can be also limited as to which objects' GC you can observe.
Though another problem is that the spec does not clearly specify when an object may be collected or allow the programmer to control GC in any way, which means relying on FinalizationRegistry may lead to leaks/failure to finalize unused resources (bad, but sometimes tolerable) or worse, use-after-free bugs (outright fatal) – see e.g. https://github.com/tc39/ecma262/issues/2650
Finalizers aren’t destructors. The finalizer doesn’t get access to the object being GC’d, for one. But even more crucially, the spec allows the engine to call your finalizer anywhere between long after the object has been GC’d, and never.
They’re basically a nice convenience for noncritical resource cleanup. You can’t rely on them.
Yes? Congratulations, you know what a finalizer is?
I was replying to this:
> would very explicitly cause the GC to have semantic effects, and I think that goes strongly against the JS philosophy.
Do you disagree that a finalizer provides for exactly that and thus can not be "strongly against the JS philosophy"?
I mean it’s an explicit violation of that philosophy as noted in the proposal:
> For this reason, the W3C TAG Design Principles recommend against creating APIs that expose garbage collection. It's best if WeakRef objects and FinalizationRegistry objects are used as a way to avoid excess memory usage, or as a backstop against certain bugs, rather than as a normal way to clean up external resources or observe what's allocated.
Fair, I wasn’t aware of that. But even so, there’s a big difference between a wonky feature intended for niche cases and documented almost entirely in terms of caveats, and “this is the new way to dispose of resources”.
And the point that this kind of thing is against the JS philosophy is pretty explicit:
https://w3ctag.github.io/design-principles/#js-gc
Need to dig into this more, but I built OneJS [1] (kinda like React Native but for Unity), and at first glance this looks perfect for us(?). Seems to be super handy for Unity where you've got meshes, RenderTextures, ComputeBuffers, and NativeContainer allocations that all need proper disposal outside of JS. By forcing disposal at lexical scopes, we can probably keep memory more stable during long Editor sessions or when hot-reloading a lot.
[1] https://github.com/Singtaa/OneJS
Ngl, I was hoping “resources” was referring to memory.
Would be amazing to have a low-level borrow-checked subset of JS, as part of JS, so you can rewrite your hot loops in it.
Granted, you could also just import * from './low-level.wat' (or .c, and compile it automatically to WASM)
This is a great upcoming feature, I wrote some practical advice (a realistic example, how to use it with TypeScript/vite/eslint/neovim/etc…) about it a few months ago here: https://abstract.properties/explicit-resource-management-is-...
did this go through the TC39 or is this a V8 only feature?
https://github.com/tc39/proposal-explicit-resource-managemen...
thanks!
Bun seems to have it and it's not using V8.
I would have preferred "defer", but "using" is a lot better than nothing.
Using is more flexible, since it doesn't need a function call; it can simply bind a variable to a value that implements [Symbol.dispose].
Only when you have an object that implements [Symbol.dispose]. If you don't, then you need to create one (like the wrapper in the example from the article) or bang out some boilerplate to explicitly make and use a DisposableStack().
So with using there's a little collection of language features to learn and use, and (probably more importantly), either app devs and library devs have to get on the same page with this at the same time, or app devs have to add a handful of boilerplate at each call site for wrappers or DisposableStacks.
They're basically duals of one another.
`using` is mostly more convenient, because it registers cleanup without needing extra calls, unlike `defer`.
And of course you can trivially bridge callbacks, either by wrapping a function in a disposable literal or by using the DisposableStack/AsyncDisposableStack utility types which the proposal also adds.
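E.g. a sketch with DisposableStack (timer and openSomethingDisposable are stand-ins for your own resources):

  using stack = new DisposableStack();
  stack.defer(() => clearInterval(timer));           // register an arbitrary callback
  const thing = stack.use(openSomethingDisposable()); // or adopt a value that already has [Symbol.dispose]
  // everything on the stack is disposed, in reverse order, when the block exits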
Maybe it's just me, but [Symbol.dispose]() seems like a really hacky way to add that functionality to an Object. Here's their example:
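Roughly along these lines (adapting the stream-reader scenario; paraphrased, not the article's exact code):

  function getReaderResource(stream) {
    const reader = stream.getReader();
    return {
      reader,
      [Symbol.dispose]() {
        reader.releaseLock();
      },
    };
  }

  using res = getReaderResource(stream);
  // res.reader is usable here; releaseLock() runs when the scope exits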
First, I had to refresh my memory on the new object definition shorthand: in short, you can use a variable or expression to define a key name by using brackets, like: let key = "foo"; { [key]: "bar" }, and secondly you don't have to write { "baz": function(p) { ... } }, you can instead write { baz(p) {...} }. OK, got it.

So, if I'm looking at the above example correctly, they're implementing what is essentially an interface-based definition of a new "resource" object. (If it walks like a duck, and quacks...)
To make a "resource", you'll tack on a new magical method to your POJO, identified not with a standard name (like Object.constructor() or Object.__proto__), but with a name that is a result of whatever "Symbol.dispose" evaluates to. Thus the above definition of { [Symbol.dispose]() {...} }, which apparently the "using" keyword will call when the object goes out of scope.
Do I understand that all correctly?
I'd think the proper JavaScript way to do this would be to either make a new object specific modifier keyword like the way getters and setters work, or to create a new global object named "Resource" which has the needed method prototypes that can be overwritten.
Using Symbol is just weird. Disposing a resource has nothing to do with Symbol's core purpose of creating unique identifiers. Plus it looks fugly and is definitely confusing.
Is there another example of an arbitrary method name being called by a keyword? It's not a function parameter like async/await uses to return a Promise, it's just a random method tacked on to an Object using a Symbol to define the name of it. Weird!
Maybe I'm missing something.
JS has used "well-known symbols"[1] to allow extending / overriding the functionality of objects for about 10 years. For example, an object is an iterable if it has a `[Symbol.iterator]` property. Symbols are valid object keys; they are not just string aliases.
[1] https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...
Symbols are a very safe way to introduce new "protocols" in either the language standard or for application code. This is because Symbol can never conflict with existing class definitions. If we use a string name for the method, then existing code semantics change.
Here are the well-known symbols that my NodeJS 22 offers when I `Symbol.<tab>`:
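Roughly this set (from memory - the exact list depends on the Node/V8 version, and this skips static methods like Symbol.for):

  Symbol.asyncDispose    Symbol.asyncIterator    Symbol.dispose
  Symbol.hasInstance     Symbol.isConcatSpreadable
  Symbol.iterator        Symbol.match            Symbol.matchAll
  Symbol.replace         Symbol.search           Symbol.species
  Symbol.split           Symbol.toPrimitive      Symbol.toStringTag
  Symbol.unscopables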
Yes you are missing something. You are not supposed to call these methods, they are for the runtime.
More specifically, JavaScript will call the [Symbol.dispose] method when it detects you are exiting the scope of a "using" declaration.
> identified not with a standard name (like Object.constructor() or Object.__proto__)
__proto__ was a terrible mistake. Google “prototype pollution”; there are too many examples to link. In a duck-typed language where the main mechanism for data deserialization is JSON.parse(), you can’t trust the value of any plain string key.
> create a new global object named "Resource" which has the needed method prototypes that can be overwritten.
those methods could conflict with existing methods already used in other ways if you wanted to make an existing class a subclass of Resource.
> To make a "resource", you'll tack on a new magical method to your POJO, identified not with a standard name [...] nothing to do with Symbol's core purpose of creating unique identifiers.
The core purpose and original reason why Symbol was introduced in JS is the ability to create non-conflicting but well known / standard names, because the language had originally reserved no namespace for such and thus there was no way to know any name would be available (and not already monkey patched onto existing types, including native types).
> Is there another example of an arbitrary method name being called by a keyword? It's not a function parameter like async/await uses to return a Promise, it's just a random method tacked on to an Object using a Symbol to define the name of it. Weird!
`Symbol.iterator` called by `for...of` is literally the original use case for symbols.
> I'd think the proper JavaScript way to do this would be to either make a new object specific modifier keyword like the way getters and setters work, or to create a new global object named "Resource" which has the needed method prototypes that can be overwritten.
Genuinely: what are you talking about.
I hadn't considered how blessed I was to have __enter__ / __exit__ in Python for context managers and the more general Protocol concept that can be used for anything because lordy that definition looks ugly as sin. Even Perl looks on in horror of the sins JS has committed.
I would like to petition JSLand to please let go of the word "use" and all of its derivatives. Cool feature though, looking forward to using (smh) it.
They're just adopting the same syntax that C# has used for a long time
Ok, but it doesn't make it any less meaningless.
async, await, let, var, const, try, catch, yield are all meaningful and precise keywords
"use" "using" on the other hand is not a precise word at all. To any non c# person it could be used to replace any of the above words!
JavaScript new features: segmentation faults, memory leaks, memory corruption and core dumps.
Nah, it still doesn’t let you allocate or free memory manually.
So… drop
The more this stuff gets introduced the more I’m convinced to use Rust everywhere. I’m not saying this is the Rust way - it’s actually reminiscent of Python/C#. But, Rust does it better.
If we keep going down these roads, Rust actually becomes the simpler language, as it was designed with all of these goals in mind instead of shoehorning them in later.
Not sure I agree or maybe we do.
I think JavaScript should remain simple. If we really need this functionality, we can bring in defer, but as a 1:1 with what is in Go. This in-between of Python and Go is too much for what JavaScript is supposed to be.
I definitely think that the web needs a second language with types, resource management and all sorts of structural guard rails. But continuing to hack into JavaScript is not it.
First it was "Why Do Animals Keep Evolving into Crabs?", now it's "Why Do Programming Languages Keep Evolving into Crabs?"
What do you mean by that? Is `drop` a language construct in another language?
Yes, it's called Drop in rust: https://doc.rust-lang.org/std/ops/trait.Drop.html
It's also in this comment, which reads like Gen Zed slang.
https://news.ycombinator.com/item?id=44012969
Drop?
it's an annoying usage because you never know whether it means "sth new appeared" or "sth old stopped being available"
New drop just dropped
Drop and biweekly
Still implemented with the supervillain language, C++?
It depends on what language the JavaScript engine is implemented in. For V8 that's C++, yeah. I would agree with Google being a supervillain nowadays, but others use C++ too, so I would think it's unfair to call it a supervillain language...
Context managers: exist.
JS: drop but we couldn't occupy a possibly taken name, Symbol for the win!
It's hilariously awkward.
> JS: drop but we couldn't occupy a possibly taken name, Symbol for the win!
You're about a decade late to the party?
That is the entire point of symbols and "well known symbols", and why they were introduced back in ES6.
And I didn't use it because there was no need.
Resource scoping is an important feature. Context managers (in Python) are literally bread and butter for everyday tasks.
It's awkward not because of Symbol, but because it introduces new syntax tied to existing implicit scopes. It's kinda fragile, based on the Go experience. Explicit scoping is way more predictable.
Nah, Symbol has been the traits mechanism for JavaScript for quite a while, e.g. Symbol.iterator.
It's the "dispose" part where the new name is decided.
Traits in the way that a roller-skate is a car.
When I discovered this feature, I looked everywhere in my codebases for a place to use it. Turns out most JS APIs, whether Web or Node.js, just don't need it, since they auto-close your resources for you. The few times I did call .close() used callbacks and would have been less clean/intuitive/correct to rewrite as scoped. I haven't yet been able to clean up even one line of code with this feature :(
I'm just a hobbyist and have some scriptlets I've written to "improve" things on various websites. As part of that I needed an uninstall/undo feature - so with my "bush league" code I would do `window.mything = {something}`, and based upon previous dabblings with the likes of Python/C#/Go, I presumed I would be able to do `delete window.mything` and it would auto-magically call a function I would have written to do the work. So the new `[Symbol.dispose]()` feature/function would have done what I was looking for - but really, it's not a big deal, because all that I actually had to do was write an interface spec'ing a `{}.remove()` method, and call it where needed.
(This paragraph is getting off topic, but still... ) Below is my exact interface that I have in a .d.ts file. The reason for that file is because I like typed languages (ie TypeScript), but I don't want to install stuff like node-js for such simple things. So I realised vscode can/will check js files as ts on-the-go, so in a few spots (like this) I needed to "type" something - and then I found some posts about svelte source code using js-docs to type their code-base instead of typescript. So that's basically what I've done here...
So chances are that in the places you could use this feature, you've probably already got an "interface" for closing things when done (even if you haven't defined the interface in a type system).

If you’re using the withResource() pattern, you’re already effectively doing this, so yeah. If you’re using try/finally, it might be worth taking a second look.
It doesn't seem to be widely used yet. I used it to clean up temporary files for a loader that downloads them.
This looks most similar to golang’s defer. It runs cleanup code when leaving the current scope.
It differs from try/finally, c# “using,” and Java try-with-resources in that it doesn’t require the to-be-disposed object to be declared at the start of the scope (although doing so arguably makes code easier to understand).
It differs from some sort of destructor in that the dispose call is tied to scope, not object lifecycle. Objects may outlive the scope if there are other references, and so these are different.
If you like golang’s defer then you might like this.
> This looks most similar to golang’s defer. It runs cleanup code when leaving the current scope.
It's nothing like Go's defer: Go's defer is function-scoped and registers a callback; using is block-scoped and registers an object with a well-defined protocol.
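A quick sketch of the scoping difference (the logging is just to show ordering):

  function demo() {
    {
      using a = { [Symbol.dispose]() { console.log("a disposed"); } };
      console.log("inner block");
    } // "a disposed" logs here, as soon as the inner block ends
    console.log("rest of demo()");
  } // with Go's defer, the cleanup would only run here, when the function returns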
> It differs from [...] c# “using,”
It's pretty much a direct copy of C#'s `using` declaration (as opposed to the using statement): https://learn.microsoft.com/en-us/dotnet/csharp/language-ref....
This can also be seen from the proposal itself (https://github.com/tc39/proposal-explicit-resource-managemen...), which cites C#'s using statement and declaration, Java's try-with-resources, and Python's context managers as prior art, but only mentions Go's defer as something you can emulate via DisposableStack and AsyncDisposableStack (types which are specifically inspired by Python's ExitStack).