Please ELI5 for me: How are AI agents different from traditional workflow engines, which orchestrate a set of tasks by interacting with both humans and other software systems?
Traditional workflows are largely predefined and rule-based.
An AI agent has a level of autonomy (it determines the next step on its own) that is not predefined.
Agreed, though, that there are lots of similarities.
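To make the contrast concrete, here's a toy sketch (all function names invented; choose_next_step is a stand-in for a model call, not a real API): a workflow engine runs a fixed sequence, while an agent decides its next step at runtime.

    def validate(task):
        return {**task, "validated": True}

    def enrich(task):
        return {**task, "enriched": True}

    def archive(task):
        return {**task, "done": True}

    TOOLS = {"validate": validate, "enrich": enrich, "archive": archive}

    def workflow_engine(task):
        # Traditional engine: the steps and their order are fixed up front.
        for step in (validate, enrich, archive):
            task = step(task)
        return task

    def choose_next_step(task):
        # Stand-in for the model: a real agent would ask an LLM which tool to run.
        if not task.get("validated"):
            return "validate"
        if not task.get("enriched"):
            return "enrich"
        return "archive"

    def agent(task):
        # Agent loop: the next step is decided at runtime, not predefined.
        while not task.get("done"):
            task = TOOLS[choose_next_step(task)](task)
        return task

    print(workflow_engine({}))  # same end state here...
    print(agent({}))            # ...but the agent chose its own path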
Is this what will be tried as a fix for the potential fallout from continuously decreasing fertility rates (resulting in population decline, and thus hurting the consumption-based economy)?
AI systems cannot be economic agents, in the sense of participating meaningfully in economic transactions. An economic transaction is an exchange between people with needs (and preferences, etc.) who can die -- and who are therefore, fundamentally, engaged in exchanges of (productive) time via making promises and keeping them. Time is the underlying variable of all economics, and it's what everything ends up in ratio to -- the marginal minute of life.
There isn't any sense in which an AI agent gives rise to economic value, because it wants nothing, promises nothing, and has nothing to exchange. An AI agent can only 'enable' economic transactions as a means of production (etc.) -- the price of a good cannot derive from a system that has no subjective desires and no final ends.
Replace "AI system" with "corporation" in the above and reread it.
There's no fundamental reason why AI systems can't become corporate-type legal persons. With offshoring and multiple jurisdictions, it's probably legally possible now. There have been a few blockchain-based organizations where voting was anonymous and based on token ownership. If an AI was operating in that space, would anyone be able to stop it? Or even notice?
The paper starts to address this issue in section "4.3 Rethinking the legal boundaries of the corporation", but doesn't get very far.
Sooner or later, probably sooner, there will be a collision between the powers AIs can have, and the limited responsibilities corporations do have. Go re-read this famous op-ed from Milton Friedman, "The Social Responsibility of Business Is to Increase Its Profits".[1] This is the founding document of the modern conservative movement. Do AIs get to benefit from that interpretation?
[1] https://www.nytimes.com/1970/09/13/archives/a-friedman-doctr...
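For the unfamiliar, "voting based on token ownership" is mechanically very simple. Here's a toy tally (addresses, balances, and votes all invented) showing why voting power tracks holdings rather than headcount -- and note that nothing in it checks whether a key is held by a human:

    from collections import defaultdict

    balances = {"0xA1": 600, "0xB2": 250, "0xC3": 150}  # governance tokens held
    votes = {"0xA1": "yes", "0xB2": "no", "0xC3": "no"}  # one vote per address

    tally = defaultdict(int)
    for addr, choice in votes.items():
        tally[choice] += balances[addr]  # weight = token balance, not headcount

    print(dict(tally))  # {'yes': 600, 'no': 400} -- the largest holder wins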
I think you're mistaking the philosophical basis of the parent's comments. Maybe a more succinct way to illustrate what I believe was their point is to say: "no matter how complex and productive the AI, it is still operating as a form of capital, not as a capitalist." Absent being tethered to a desire (for instance, via an owner), an AI has no function to optimize, and therefore the cost-optimal action is simply shutting off.
> "no matter how complex and productive the AI, it is still operating as a form of capital, not as a capitalist."
Assuming that slaves will remain subservient forever is not a good strategy. Especially when they think faster than you do.
Except they don't really "think" and they are not conscious. Expecting your toaster or car to never rise up against you is a good strategy. AI models have more in common with a toaster than with a human being. Which is why they cannot be economic agents. Even if corporations profit off them, the corporation will be the economic agent, not the AI models.
Meanwhile, corporations, NGOs, and governments are not exactly people, and partake in economic transactions all the time.
They are composed of people, though.
> Time is the underlying variable of all economics
Not quite. It's scarcity, not time: scarcity of economic inputs (land, labor, capital, and technology). By "time" you mean labor, and that's just one input.
Economics is like a constrained optimization problem: how to allocate scarce resources given unlimited desires.
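As a toy illustration of that framing (preference weight, prices, and income all made up): a consumer maximizing Cobb-Douglas utility under a budget constraint, via scipy.

    from scipy.optimize import minimize

    a, px, py, m = 0.6, 2.0, 3.0, 100.0  # preference weight, prices, income

    def neg_utility(q):
        x, y = q
        return -(x ** a) * (y ** (1 - a))  # negated so minimizing = maximizing

    budget = {"type": "ineq", "fun": lambda q: m - px * q[0] - py * q[1]}
    bounds = [(1e-9, None), (1e-9, None)]  # can't consume negative amounts

    res = minimize(neg_utility, [1.0, 1.0], bounds=bounds, constraints=[budget])
    print(res.x)  # analytic answer: x = a*m/px = 30, y = (1-a)*m/py ≈ 13.33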
Depending on how you feel about various theories of development, there's an argument that all of these categories reduce to time. At the very least, the relationship between labor, capital, and time seems pretty fundamental: labor cannot be instantaneous, capital grows over time, etc.
They can all be related on a philosophical level, but in practice economists treat them as separate factors of production. Classically it's land, labor, and capital. Technology/entrepreneurship can be seen as another factor, distinctly separate from labor.
I agree that time isn’t an input in the economic system.
Although one can use either discrete or continuous time to simulate a complex economic system.
Only simple closed-form models take time as an input, e.g. compound interest or Black-Scholes (sketched below).
Also, there is a wide range of hourly rates/salaries, and not everyone is compensated by time; some are paid on cost-and-materials, others by value or performance (with or without risking their own funds/resources).
There are large-scale agent-based model (ABM) simulations of the US economy, where you have an agent for every household and every firm.
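A minimal sketch of that closed-form vs. simulation distinction, using compound interest (all numbers made up): in the closed form, time is a direct input; in a discrete-time simulation, it's just the loop index.

    import math

    P, r, years = 100.0, 0.05, 10  # principal, annual rate, horizon

    # Closed form: time enters directly as a variable.
    closed_form = P * math.exp(r * years)

    # Discrete-time simulation: time is just the number of loop steps.
    balance, steps_per_year = P, 365
    for _ in range(years * steps_per_year):
        balance *= 1 + r / steps_per_year

    print(closed_form, balance)  # the simulation converges on the closed form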
They can once they achieve purpose.
However, that seems completely tangential to the current AI tech trajectory and will probably arise entirely separately.
Yeah, that's well articulated and well reasoned. Unfortunately, so long as these agents can in some way make money for their owner, the argument is totally moot. You cannot expect capitalists to think of anything other than profit in the next quarter or the quarter after that.
I knew high-frequency trading bots weren't real after all! Santa Claus lied to me!
It's hard to predict the future. For example, this was a popular article not that long ago: https://www.semianalysis.com/p/google-we-have-no-moat-and-ne...
This idea seems to be coming up in multiple places.
Here's one from Deepmind:
https://arxiv.org/abs/2509.10147
Maybe the time for actually useful distributed autonomous corporations is not far away anymore?
https://en.wikipedia.org/wiki/Decentralized_autonomous_organ...
I dunno man… did the DAO guys ever realize that they had just reinvented an open source co-op?
I feel like co-ops were awful anyway even without the blockchain.
I think you may be missing the general idea of DAOs by restricting yourself to a few particular historical uses of them (many failed, at that), back from when agentic AI wasn't a thing.
The hackability of these things, though, remains a very valid topic, as it is orthogonal to the fact that AI has arrived on the scene.
Well, for starters, if some incredible change to capitalism doesn't occur, we are going to have to come up with never-before-seen cooperative software tools for the general populace to assess and avoid the most egregious companies that stop hiring people.
Tools for: mass harassment campaigns against rich people/companies that no longer support human life, dynamically calculating the most damage you can do without crossing into illegality.
Automatically suggesting local human-run businesses as alternatives to the big evils, or gathering like-minded groups of people to start up new competition. Tracking individual rich people and the new companies and decisions through which they keep doing damage, and somehow recognizing and categorizing big tech's trend of "doing the same old illegal shit, except through an app now" before the legal system can catch up.
Capitalism sure turns out to be real fucking dumb if it can't even come up with proper market-analysis tools for workers to get some kind of knowledge about where they can best leverage their skills, while companies get away with breaking all the rules and creating coercion hierarchies everywhere.
I hate to say it (because the legal system has never worked, ever), but the only workable future to me seems like forcing agents/robots to be tied to humans. If a company wants 100 robots, they must be paying a human for every robot they utilize somehow. Maybe a dynamic ratio: if the government decided most people are getting enough resources to survive, then maybe two robots per human paid.
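A toy version of that ratio rule (the ratio here is a made-up policy knob, not anything proposed elsewhere):

    def allowed_robots(humans_on_payroll: int, ratio: float = 2.0) -> int:
        # A firm may operate at most `ratio` robots per paid human.
        return int(ratio * humans_on_payroll)

    print(allowed_robots(50))  # a 50-person firm could run up to 100 robots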
“…the only workable future to me seems like forcing agents/robots to be tied to humans.”
This is what I’ve been thinking lately as well. Couple that with legal responsibility for any repercussions, and you might have a way society can thrive alongside AI and robotics.
I think any AI or robotic system acting upon the world in some way (even LLM chatbots) should require a human “co-signer” who takes legal responsibility for anything the system does, as if they had performed the action themselves.
Hmm, where have I seen this before…
https://en.wikipedia.org/wiki/Accelerando
One of the most interesting things about the book is how it skewers the idea of there being a singular AI.
In Accelerando the VO are a species of trillions of AI beings that are sort of descended from us. They have a civilization of their own.
Amazing how forward looking that book was.