iambateman 10 hours ago

Claude Code is hard to describe. It’s almost like I changed jobs when I started using it. I’ve been all-in with Claude as a workflow tool, but this is like being on steroids.

If you haven’t tried it, I can’t recommend it enough. It’s the first time it really does feel like working with a junior engineer to me.

  • arealaccount 10 hours ago

    Weirdly enough, I have the opposite experience: it will take several minutes to do something, then I go in and debug for a while because the app has become fubar, and finally I realize it did the whole thing incorrectly and throw it all away.

    And I reach for Claude quite a bit, because if it worked as well for me as everyone here says it does for them, that would be amazing.

    But at best it’ll get a bunch of boilerplate done after some manual debugging; at worst I spend an hour and some amount of tokens on a total dead end.

    • 0x_rs 9 hours ago

      Some great advice I've found that seems to work very well: ask it to keep a succinct journal of all the issues and roadblocks found during the project's development, and what was done to resolve or circumvent them. As for avoiding bloating the code base with scatterbrained changes, having a tidy architecture with good separation of concerns helps lead it toward working solutions, but you need to actively guide it. For someone who enjoys problem-solving more than actually implementing solutions, it's very fun.
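
      For example, a standing instruction in CLAUDE.md along these lines (the wording and file path are mine, adapt to taste):

          ## Journal
          After resolving any issue or roadblock, append a dated one-line entry
          to docs/JOURNAL.md: what broke, what fixed it, and what to avoid next
          time. Read docs/JOURNAL.md before starting new work.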

      • taude 9 hours ago

        To continue on this: I wouldn't let Claude or any agent actually create a project structure; I'd guide it in the custom system prompt. Then, in each of the folders, continue to have specific prompts for how you expect the assets to be coded, plus common behavior, libraries, etc.

        • gonzo41 5 hours ago

          So you've invented writing out a full business logic spec again.

          btw, I'm not throwing shade. I personally think upfront design through a large lumbering document is actually a good way to develop stuff: you either do it upfront, or through endless iterations in sprints for years.

          • underdeserver 32 minutes ago

            The problem with waterfall wasn't the full business spec, it was that people wrote the spec once and didn't revise it when reality pushed back.

          • bugglebeetle 2 hours ago

            Yeah, my experience of working with Claude Code is that I’m actually far more conscientious about design. After using it for a while, you get a good sense of its limits and how you need to break things down and spell things out to overcome them.

    • baka367 3 hours ago

      For me it fixed a library compatibility issue with React 19 in 10 minutes and several nudges, starting from the console error and library name.

      It would have been at least a half-day’s adventure had I done it myself (from diagnosing to fixing).

    • taude 9 hours ago

      Do you create CLAUDE.md files at several levels of your folder structure, so you can teach it how to do different things? Configuring these default system prompts is required to get it to work well.
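
      Roughly this shape (paths made up, just to illustrate):

          CLAUDE.md                  # project-wide: build/test commands, conventions
          api/CLAUDE.md              # error-handling and routing style for the API
          frontend/CLAUDE.md         # component patterns, styling rules
          frontend/tests/CLAUDE.md   # how tests are structured and named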

      I'd definitely watch Boris's intro video below [1]; there's also a summary of it [2].

      [1] Boris's introduction: https://www.youtube.com/watch?v=6eBSHbLKuN0

      [2] Summary of the above video: https://www.nibzard.com/claude-code/

      • dawnerd 8 hours ago

        By the time you do all of that you might as well just write code by hand.

        • makeramen 2 hours ago

          You don't do it manually. Once you’ve guided Claude back on track, you have it update the file itself, so it reminds itself not to make the same mistake next time.

        • serf 6 hours ago

          That's really just a question of scale.

          Yes, I would write a 4-line bash script by myself.

          But if you're trading a 200-line comprehensive claude.md document for a module that might be 20k LoC? It's a different value proposition.

    • libraryofbabel 8 hours ago

      Sigh. As others have commented, over and over again in the last 6 months we've seen discussions on HN with the same basic variation of "Claude Code [or whatever] is amazing" with a reply along the lines of "It doesn't work for me, it just creates a bunch of slop in my codebase."

      I sympathize with both experiences and have had both. But I think we've reached the point where such posts (both positive and negative) are _completely useless_, unless they're accompanied with a careful summary of at least:

      * what kind of codebase you were working on (language, tech stack, business domain, size, age, level of cleanliness, number of contributors)

      * what exactly you were trying to do

      * how much experience you have with the AI tool

      * is your tool set up so it can get a feedback loop from changes, e.g. by running tests

      * how much prompting did you give it; do you have CLAUDE.md files in your codebase

      and so on.

      As others pointed out, TFA also has the problem of not being specific about most of this.

      We are still learning as an industry how to use these tools best. Yes, we know they work really well for some people and others have bad experiences. Let's try and move the discussion beyond that!

      • imiric 8 hours ago

        It's telling that you ask these details from a comment describing a negative experience, yet the top-most comment full of praises and hyperbole is accepted at face value. Let's either demand these things from both sides or from neither. Just because your experience matches one side, doesn't mean that experiences different from yours should require a higher degree of scrutiny.

        I actually think it's more productive to just accept how people describe their experience, without demanding some extensive list of evidence to back it up. We don't do this for any other opinion, so why does it matter in this case?

        > Let's try and move the discussion beyond that!

        Sharing experiences using anecdotal evidence covers most of the discussion on forums. Maybe don't try to police it, and either engage with it, or move on.

        • serf 5 hours ago

          >Let's either demand these things from both sides or from neither. Just because your experience matches one side, doesn't mean that experiences different from yours should require a higher degree of scrutiny.

          Sort of.

          The people that are happy with it and praising the avenues offered by LLM/AI solutions are creating codebases that fulfill their requirements, whatever those might be.

          The people that seem to be unhappy with it tend to have the universal complaints of either "it produces garbage" , or "I'm slower with it.".

          Maybe I'm showing my age here, but I remember these same exact discussions between people who either praised or disparaged search engines, the alternative being an internet Yellow Pages (which was a thing for many years).

          The ones that praised it tended to be people who were taught or otherwise figured out how to use metadata tags like date:/onsite: , whereas the ones that disparaged it tended to be the folks who would search for things like "who won the game" and then proceed to click every scam/porno link on this green Earth and then blame Google/gdg/lycos/whatever when they were exposed to whatever they clicked.

          In other words: the proof is kind of in the pudding.

          I wouldn't care about the compiler logs from a user that ignored all syntax and grammar rules of a language after picking it up last week, either -- but it's useful for successful devs to share their experiences both good and bad.

          I care more about the opinions of those that know the rules of the game -- let the actual teams behind these software deal with the user testing and feedback from people that don't want to learn conventions.

        • libraryofbabel 8 hours ago

          I should have been clearer - I'd like to see this kind of information from positive comments as well. It's just as important. If someone is having success with Claude Code while vibe-coding a toy app, I don't care. If they're having success with it on a large legacy codebase, I want them to write a blog post all about what they're doing, because that's extremely useful information.

        • gilfoy 7 hours ago

          It’s telling that they didn’t specifically aim it at the negative experience and you filled that in yourself.

          • rounce 7 hours ago

            It was the comment they replied to. If it was a general critique of the state of discourse around agentic tools and Claude Code in particular why not make it a top level comment?

            • libraryofbabel 6 hours ago

              Oh, because I wanted to illustrate that the discourse is exemplified by the pair of the GP comment (vague and positive) and the parent comment (vague and negative). Therefore I replied to the negative parent comment.

        • leptons 3 hours ago

          >But I think we've reached the point where such posts (both positive and negative) are _completely useless_, unless they're accompanied with a careful summary of at least:

          They did mention "(both positive and negative)", and I didn't take their comment to be one-sided towards the AI-negative comments only.

      • positron26 4 hours ago

        The framing has been rather problematic. I find these differences in premises are lurking below the conversations:

        - Some believe LLMs will be a winner-take-all market and reinforce divergences in economic and political power.

        - Some believe LLMs have no path of evolution and have therefore already plateaued at a level too low to be sustainable given these investments in compute, which would imply it's a flash in the pan that will collapse.

        - Some believe LLMs will all be hosted forever, always living in remote services because the hardware requirements will always be massive.

        - Some believe LLMs will create new, worse kinds of harm without enough offsetting creation of new kinds of defense.

        - Some believe LLMs and AI will only ever give low-skilled people mid-skill results and therefore work against high-skill people by diluting mid-end value without creating new high-end value for them.

        We need to be more aware of how we are framing this conversation because not everyone agrees on these big premises. It very strongly affects the views that depend on them. When we don't talk about these points and just judge and reply based on whether the conclusion reinforces our premises, the conversation becomes more political.

        Confirmation bias is a thing. Individual interests are a thing. Some of the outcomes, like regulation and job disruption, depend on what we generally believe. People know this and so begin replying and voting according to their interests, to convince others to aid their cause without respect for the truth. This can be counter-productive to the individual if they are wrong about the premises and end up pushing an agenda that doesn't even actually benefit them.

        We can't tell people not to advance their chosen horse at every turn of a conversation, but those of us who actually care about the truth of the conversation can take some time to consider the foundations of the argument and remind ourselves to explore that and bring it to the surface.

      • dejavucoder 8 hours ago

        Fair point.

        For context, I was using Claude Code on a large Ruby + TypeScript open source codebase. 50M+ tokens. They had specs and e2e tests, so yeah, I did have feedback when I was done with a feature - I could run specs and Claude Code could form a loop. I would usually advise it to fix specs one by one, with --fail-fast to find errors fast.
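
        i.e. something like this, one spec file at a time (the spec path is made up):

            bundle exec rspec spec/models/widget_spec.rb --fail-fast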

        Prior to Claude Code, I had been using Cursor for a year or so.

        Sonnet is particularly good at Next.js and TypeScript stuff. I also ran this on a medium-sized Python codebase and some ML-related work too (ranging from LangChain to PyTorch lol).

        I don't do a lot of prompting, just enough to describe my problem clearly. I try my best to identify the relevant context or direct the model to find it fast.

        I made new claude.md files.

        • zer00eyz 10 minutes ago

          I spend a fair amount of time tinkering in Home Assistant. My experience with that platform and LLMs can be summed up as "this is amazing".

          I also do a fair amount of data shuffling with Golang. My LLM experience there is "mixed".

          Then I deal with quite a few "fringe" codebases and problem spaces. There LLMs fall flat past the stuff that is boilerplate.

          "I work in construction and use a hammer" could mean framer, roofer or smashing out concrete with a sledge. I suspect that "I am a developer, I write code" plays out in much the same way, and those details dictate experience.

          Just based on the volume of Ruby and TypeScript out there, and the overlap with the output of these platforms, your experience is going to be pretty good. I would be curious whether, if you went and did something less mainstream in a less common language (say Zig), you would have the same feelings and feedback that you do now. Based on my own experience, I suspect you would not.

      • state_less 8 hours ago

        Here's a few general observations.

        Your LLM (CC) doesn't have your whole codebase in context, so it can run off and make changes without considering that some remote area of the codebase is (subtly?) depending on the part Claude just changed. This can be mitigated to some degree depending on the language and the tests in place.

        The LLM (CC) might identify a bug in the codebase, fix it, and then figure, "Well, my work here is done," and just leave it as is, without considering ramifications or that the same sort of bug might exist elsewhere.

        I could go on, but my point is simply to validate the issues people are having, while also acknowledging those seeing the value of an LLM like CC. It does provide useful work (e.g. large tedious refactors, prototyping, tracking down a variety of bugs, and so on).

        • simonw 8 hours ago

          Right, which is why having a comprehensive test suite is such an enormous unlock for this class of technology.

          If your tests are good, Claude Code can run them and use them to check it hasn't broken any distant existing behavior.

          • dawnerd 8 hours ago

            Not always the case. It’ll just go and “fix” the tests to pass instead of fixing the core issue.

            • simonw 7 hours ago

              That used to happen a whole lot more. Recent Claudes (3.7, 4) are less likely to do that in my experience.

              If they DO do that, it's on us to tell them to undo that and fix things properly.

      • QuantumGood 6 hours ago

        Agree. It keeps getting closer to "I've had a negative experience with the internet ..."

      • reactordev 8 hours ago

        Seconded: a summary description of your problem, codebase, and programming dialect in use should be included with every “<Model> didn’t work for me” response.

      • rstuart4133 7 hours ago

        > But I think we've reached the point where such posts (both positive and negative) are _completely useless_, unless they're accompanied with a careful summary of at least ...

        I use Claude many times a day, and I ask it and Gemini to generate code most days. Yet I fall into the "I've never included a line of code generated by an LLM in committed code" category. I haven't got a precise answer for why that is so. All I can come up with is that the generated code lacks the depth of insight needed to write a succinct, fast, clear solution to the problem that someone can easily understand in 2 years' time.

        Perhaps the best illustration of this is that someone proudly proclaimed to me they committed 25k lines in a week, with the help of AI. In my world, this sounds like claiming they have a way of turning the sea into ginger beer. Gaining the depth of knowledge required to change 25k lines of well-written code would take me more than a week of reading. Writing that much in a week is a fantasy. So I asked them to show me the diff.

        To my surprise, a quick scan of the diff revealed what the change did. It took me about 15 minutes to understand most of it. That's the good news.

        The bad news is that the 25k lines added 6 fields to a database. Two thirds were unit tests, and perhaps two thirds of the remainder was comments (maybe more). The comments were glorious in their length and precision, littered with ASCII-art tables showing many rows.

        Comments in particular are a delicate art. They are rarely maintained, so they can bit-rot into downright misleading babble after a few changes. But the insight they provide into what the author was thinking, and in particular the invariants he had in mind, can save hours of divining it from the code. Ideally they concisely explain only the obscure bits you can't easily see from the code itself. Anything more becomes technical debt.

        Quoting Woodrow Wilson on the amount of time he spent preparing speeches [0]:

            “That depends on the length of the speech,” answered the President. “If it is a ten-minute speech it takes me all of two weeks to prepare it; if it is a half-hour speech it takes me a week; if I can talk as long as I want to it requires no preparation at all. I am ready now.”
        
        Which is a roundabout way of saying I suspect the usefulness of LLM-generated code depends more on how often a human is likely to read it than on any of the things you listed. If it is write-once, and the requirement is that it works for most people in the common cases, LLM-generated code is probably the way to go.

        I used PayPal's KYC web interface the other day. It looked beautiful, completely in line with the rest of PayPal's styling. But sadly I could not complete it because of bugs. The server refused to accept one page; it just returned to the same page with no error messages. No biggie, I phoned support (several times, because they also could not get past the same bug), and after 4 hours on the phone the job was done. I'm sure the bug will be fixed by a new contractor. He will spend a few hours on it, getting an LLM to write a new version and throwing the old code away, just as his predecessor did. He will say the LLM provided a huge productivity boost, and PayPal will be happy because he cost them so little. It will be the ideal application for an LLM - it got the job done quickly, and no one will read the code again.

        I later discovered there was a link on the page that allowed me to skip past the problematic page, so I could at least enter the rest of the information. It was in a thing that looked confusingly like a "menu bar" on the left, although there was no visual hint that any of the items in the menu were clickable. I clicked on most of them anyway, but they did nothing. While on hold for phone support, I started reading the HTML and found one was a link. It was a bit embarrassing to admit to the help person I hadn't clicked that one. It sped the process up somewhat. As I said, the page did look very nice to the eye, probably partially because of the lack of clutter created by visual hints on what was clickable.

        [0] https://quoteinvestigator.com/2012/04/28/shorter-letter/

      • 0x457 8 hours ago

        There are some tasks it can handle and some it can't, but IMO a lot of the gap between "Claude Code [or whatever] is amazing" and "It doesn't work for me, it just creates a bunch of slop in my codebase" is "I know how to use it" vs "I don't know how to use it", with a side of "I have good test coverage" vs "tests?"

    • jm4 10 hours ago

      You can tell Claude to verify its work. I’m using it for data analysis tasks and I always have it check the raw data for accuracy. It was a whole different ballgame when I started doing that.

      Clear instructions go a long way, asking it to review work, asking it to debug problems, etc. definitely helps.

      • vunderba 10 hours ago

        > You can tell Claude to verify its work

        Definitely - with ONE pretty big callout. This only works when a clear and quantifiable rubric for verification can be expressed. Case in point, I put Claude Code to work on a simple react website that needed a "Refresh button" and walked away. When I came back, the button was there, and it had used a combination of MCP playwright + screenshots to roughly verify it was working.

        The problem was that it decided to "draw" a circular arrow refresh icon and the arrow at the end of the semicircle was facing towards the circle centroid. Anyone (even a layman) would take one look at it and realize it looked ridiculous, but Claude couldn't tell even when I took the time to manually paste a screenshot asking if it saw any issues.

        While it would also be unreasonable to expect a junior engineer to hand-write the coordinates for a refresh icon in SVG, they would never even attempt it in the first place, realizing it would be far simpler to grab one from Lucide, Font Awesome, emojis, etc.

        • DrewADesign 2 hours ago

          In general, using your own symbol forms for interactions rather than taking advantage of people’s existing mental models is a bad idea. Even straying from known libraries is shaky unless you’re a competent enough designer to understand what specific parts of a visual symbol signify that specific idea/action, and to whom. From a usability perspective, you’re much better off not using a symbol at all than using the wrong one.

      • yakz 10 hours ago

        I second this and would add that you really need an automated way to do it. For coding, automated test suites go a long way toward catching boneheaded edits. It will understand the error messages from the failed tests and fix the mistakes more or less by itself.

        But for other tasks, like generating reports, I ask it to write little tools that reformat data against a schema definition, perform calculations, or do other things that are fairly easy to double-check with tests that produce errors it can work with. Having it "do math in its head" is just begging for disaster, but it can easily write a tool to do it correctly.
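
        A minimal sketch of the kind of checker I mean (the CSV schema and field names are hypothetical):

            import csv
            import sys

            # Recompute each row's line_total from unit_price * quantity and
            # report mismatches as errors the model can read and act on.
            def check_totals(path: str) -> int:
                errors = 0
                with open(path, newline="") as f:
                    # start=2 because line 1 is the header row
                    for lineno, row in enumerate(csv.DictReader(f), start=2):
                        expected = float(row["unit_price"]) * int(row["quantity"])
                        actual = float(row["line_total"])
                        if abs(expected - actual) > 0.005:
                            print(f"line {lineno}: expected {expected:.2f}, got {actual:.2f}")
                            errors += 1
                return errors

            if __name__ == "__main__":
                sys.exit(1 if check_totals(sys.argv[1]) else 0)

        The point is that the output is machine-readable, so the model can run the tool and fix its own mistakes.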

      • bigiain 7 hours ago

        > Clear instructions go a long way, asking it to review work, asking it to debug problems, etc. definitely helps.

        That's exactly what I learned. In the early 2000's, from three expensive failed development outsourcing projects.

    • wyldfire 8 hours ago

      I have seen both success and failure. It's definitely cool and I like to think of it as another perspective for when I get stuck or confused.

      When it creates a bunch of useless junk I feel free to discard it and either try again with clearer guidelines (or switch to Opus).

    • tcdent 10 hours ago

      This has a lot to do with how you structure your codebase; if you have repeatable patterns that make conventions obvious, it will follow them for the most part.

      When it drops in something hacky, I use that to verify the functionality is correct and then prompt a refactor to make it follow better conventions.

    • leptons 3 hours ago

      Have you tried vibing harder?

    • hnaccount_rng 10 hours ago

      Yeah, that is kind of my experience as well. And - according to the friend who highly recommended it - I gave it a task that is "easily within its capabilities". Since I don't think I'm being gaslit, I suspect I'm using it wrong. But I really can't figure out why, and I'm on my third attempt now...

  • polishdude20 43 minutes ago

    I found Cursor much better than Claude Code. Running Claude Code, it did so many commands and so much internal prompting to get a small thing done, and it ate up tonnes of my quota. Cursor, on the other hand, did it super quick and straight to the point. Claude Code just got stuck in grep hell.

  • ivanech 8 hours ago

    Just got it at work today and it’s a dramatic step change beyond Cursor despite using the same foundation models. Very surprising! There was a task a month ago where AI assistance was a big net negative. Did the same thing today w/ Claude Code in 20ish minutes. And for <$10 in API usage!

    Much less context babysitting too. Claude code is really good at finding the things it needs and adding them to its context. I find Cursor’s agent mode ceases to be useful at a task time horizon of 3-5 minutes but Claude Code can chug away for 10+ minutes and make meaningful progress without getting stuck in loops.

    Again, all very surprising given that I use sonnet 4 w/ cursor + sometimes Gemini 2.5 pro. Claude Code is just so good with tools and not getting stuck.

    • iambateman 8 hours ago

      Cool! If you're on pro, you can use a _lot_ of claude code without paying for API usage, btw.

    • bn-l an hour ago

      Even though it’s the same model, Cursor adds a massive system prompt to every request, and it’s shit and lobotomizes the models. After the rug pull, I’m going exclusively Claude Code at the end of my billing period, or whenever Cursor cuts me off the $60-a-month plan (which will probably come first, a bit over halfway into my month).

  • ern 2 hours ago

    I liked Claude Code when I used it initially to document a legacy codebase. The developer who maintains the system reviewed the documentation, and said it was spot-on.

    But the other day I asked it to help add boundary logging to another legacy codebase and it produced some horrible, duplicated and redundant code. I see these huge Claude instruction files people share on social media, and I have to wonder...

    Not sure if they're rationing "the smarts" or performance is highly variable.

  • pragmatic 10 hours ago

    Could you elaborate a bit on the tasks, languages, domain, etc. you’re using it with?

    People have such widely varying experiences and I’m wondering why.

    • thegrim33 9 hours ago

      I find it pretty interesting that it's a roughly 2,500-word article on "using Claude Code" and they never once actually explain what they're using it for or what type of project they're coding. It's all just so generic. I read some of it, then realized there was absolutely no substance in what I had just read.

      It's also another in my growing list of data points towards my opinion that if an author posts meme pictures in their article, it's probably not an article I'm interested in reading.

      • kraftman 9 hours ago

        Yeah, I got about halfway through before thinking "wow, there's no information in this" and giving up.

    • _se 9 hours ago

      It's always POC apps in JS or Python, or very small libraries in other popular languages with good structure from the start. There are ways to make them somewhat better in other cases (automated testing/validation/linting being a big one), but for the type of thing that 95% of developers are doing day to day (working on a big, sprawling codebase where none of those attributes apply), it's not close to being there.

      The tools really do shine where they're good though. They're amazing. But the moment you try to do the more "serious" work with them, it falls apart rapidly.

      I say this as someone that uses the tools every day. The only explanation that makes sense to me is that the "you don't get it, they're amazing at everything" people just aren't working on anything even remotely complicated. Or it's confirmation bias that they're only remembering the good results - as we saw with last week's study on the impact of these tools on open source development (perceived productivity was up, real productivity was down). Until we start seeing examples to the contrary, IMO it's not worth thinking that much about. Use them at what they're good at, don't use them for other tasks.

      LLMs don't have to be "all or nothing". They absolutely are not good at everything, but that doesn't mean they aren't good at anything.

      • ants_everywhere 7 hours ago

        > They're amazing. But the moment you try to do the more "serious" work with them, it falls apart rapidly.

        Sorry, but this is just not true.

        I'm using agents with a totally idiosyncratic code base of Haskell + Bazel + Flutter. It's a stack that is so quirky and niche that even Google hasn't been able to make it work well despite all their developer talent and years of SWEs pushing for things like Haskell support internally.

        With agents I'm easily 100x more productive than I would be otherwise.

        I'm just starting on a C++ project, but I've already done at least 2 weeks worth of work in under a day.

        • iammrpayments 2 minutes ago

          I’m going to ask what I’ve asked the last person here who said they are “10-20x” more productive:

          If you’re really that much more productive, why don’t you quit your job and vibe-code 10 iOS apps (in your case that would be 50 to 100, proportionally)?

        • _se 4 hours ago

          Share the codebase and what you're doing or, I'm sorry, you're just another example of what I laid out above.

          If you honestly believe that "agents" are making you better than Google SWEs, then you severely need to take a step back and reevaluate, because you are wrong.

        • cpursley 7 hours ago

          What do you mean “with agents”?

          • ants_everywhere 6 hours ago

            I've been using mainly gemini-cli and am starting to play around with claude code.

            • cpursley 6 hours ago

              Are you referring to those as agents or do you mean spinning separate/multiple agents out of sessions on them?

    • iambateman 3 hours ago

      I’m a TALL developer, so Laravel, Livewire, Tailwind, Alpine.

      It’s nice because 3/4 of those are well-known but not “default” industry choices and it still handles them very well.

      So there’s a Laravel CRM builder called Filament which is really fun to work in. Claude does a great job with that. It’s a tremendous amount of boilerplate with clear documentation, so it makes sense that Claude would do well.

      The thing I appreciate though is that CC as an agent is able to do a lot in one go.

      I’ve also hooked CC up to a read-only API for a client, and I need to consume all of the data on that API for the transition to a Filament app. Claude is currently determining the schema, replicating it in Laravel, and doing a full pull of API records into Laravel models, all on its own. It’s been running for 10 minutes with no interruption, and I expect it will perform flawlessly.

      I invest a lot of energy in prompt preparation. My prompts are usually about 200 words for a feature, and I’ll go back and forth with an LLM to make sure it thinks it’s clear enough.

    • rr808 4 hours ago

      My opinion is that the AI is the distilled average of all the code it can scrape. For the stuff I'm good at and work on every day, it doesn't help much beyond some handy code completions. For stuff I'm below average at, like bash commands and JS, it helps me get up to average. Most valuable to me is when I can use it to learn something: it gives some good alternatives and ideas if you're working on something mainstream.

    • criddell 10 hours ago

      I haven't had great luck with Claude writing Win32 code (using MFC) in C++. It invents messages and APIs all the time that read exactly like what I want them to do.

      I'd think Win32 development would be something AIs are very strong at because it's so old, so well documented, and there's a ton of code out there for it to read. Yet it still struggles with the differences between Windows messages, control notification messages, and command messages.

  • kbuchanan 10 hours ago

    I've had the same experience, although I feel like Claude is far more than a junior to me. Its ability to propose options, make recommendations, and illustrate trade-offs is just unreal.

  • kobe_bryant 7 hours ago

    In what sense? Instead of doing your job, which I assume you've been doing successfully for many years, you now ask Claude to do it for you and then have to review it?

  • giancarlostoro 5 hours ago

    I am loving the Zed editor and they integrate Claude primarily so I might give it a shot.

  • komali2 3 hours ago

    Does anyone have any usage guides they can recommend to feel this way about using Claude code, other than the OP article? I fired it up yesterday for about an hour and tried it on a couple tickets and it felt like a total waste of time. The answers it gave were absurdly incorrect - I was being quite specific in my prompting and it seemed to be acquiring the proper context, but just doing nothing like what I was asking.

    E.g. I asked it to swap all onChange handlers in a component to update a useState rather than directly fire a network request, and then add onBlur handlers for the actual network request. It didn't add any useStates and just added onBlur handlers that sent network requests to the wrong endpoint. Bizarre.

  • satisfice 2 hours ago

    Are you doing anything useful? How can anyone outside of yourself know this?

    My own experiments only show that this technology is unreliable.

  • apwell23 9 hours ago

    Half the posts on Hacker News are the same discussion, over and over, about coding agents' usefulness or lack thereof.

  • gjsman-1000 10 hours ago

    > It’s the first time it really does feel like working with a junior engineer to me.

    I have mixed feelings, because this means there’s really no business reason to ever hire a junior; but it also (I think) threatens the stability of senior-level jobs long term, especially as seniors slowly lose their knowledge and let Claude take care of things. The result is basically: when did you get into this field, by year?

    I’m actually almost afraid I need to start crunching Leetcode, learning other languages, and then apply to DoD-like jobs where Claude Code (or other code security concerns) mean they need actual honest programmers without assistance.

    However, the future is never certain, and nothing is ever inevitable.

    • kimixa 2 hours ago

      It's a junior engineer that doesn't learn: it makes the same mistakes even after being corrected, the second the correction falls out of its context window (often even with the "corrections" still there...); it struggles to abstract those categories of mistakes to avoid making similar ones in the future; and (by the looks of it) it will never be "the senior". "Hiring a junior" should really be seen as an investment more than immediate output.

      I keep being told that $(WHATEVER MODEL) is the greatest thing ever, but every time I actually try to use them they're of limited (but admittedly non-zero) usefulness. There's only so many breathless blogs or comments I can read that just don't mesh with the reality I personally see.

      Maybe it's sector? I generally work on Systems/OS/Drivers, large code bases in languages like C, C++ and Rust. Most larger than context windows even before you look at things like API documentation. Even as a "search and summarizer" tool I've found it completely wrong in enough cases to be functionally worthless as the time required to correct and check the output isn't a saving. But they can be handy for "autocompletion+" - like "here's a similar existing block of code, now do the same but with (changes)".

      They generally seem pretty good at being like a template engine on non-templated code, so thing like renaming/refactoring or similar structure recognition can be handy. Which I suspect might also explain some of those breathless blog posts - I've seen loads which say "Any non-coder can make a simple app in seconds!" - but you could already do that, there's a million "Simple App Tutorial" codebases that would match whatever license you want, copy one, change the name at the top and you're 99% of the way to the "Wow Magic End Result!" often described.

    • moomoo11 2 hours ago

      We are using probabilistic generators to output what should be deterministic solutions.

    • Quarrelsome 9 hours ago

      > because this means there’s really no business reason to ever hire a junior

      Aren't these people your seniors in the coming years? It's healthy to model an inflow and outflow.

      • toomuchtodo 8 hours ago

        The pipeline dries up when orgs would rather get the upfront savings of gen AI productivity gains versus invest in talent development.

  • dejavucoder 10 hours ago

    Almost feels like a game as you level up!

  • dude250711 7 hours ago

    How you guys are happy with an '80s-looking terminal interface is beyond me...

    If Claude is so amazing, could Anthropic not make their own fully-featured yet super-performant IDE in like a week?

    • yoyohello13 3 hours ago

      Free yourself from the shackles of the GUI.

djaychela an hour ago

Can someone offer me some help? I've just been messing about "vibe coding" little Python apps with a local LLM, Continue, and VS Code. And I only got so far with it.

Then I found the free tier of Claude, so I fed in the "works so far" version with the changes that the local LLM had made, and it fixed and updated all the issues (with clear explanations) in one go. Success!

So my next-level attempt was to get all the specs and prompts together for a new project (a simple Manic Miner-style 2D game using pygame). I used ChatGPT to craft all this, and it looked sensible to me, with appropriate constraints for different parts of the project.

Which Claude created. But it keeps referring to a method it says is not present in the code, and telling me I'm running the wrong version (I'm definitely not). I've tried pointing the method out by line number and the surrounding code, but it's just gaslighting me.

Any ideas how to progress from this? I'm not expecting perfection, but it seems it's just taken me to a higher level before running into essentially the same issue as the local LLM.

All advice appreciated. I'm just dabbling with this for a bit of fun when I can (I'm pretty unwell, so I do things as and when I feel up to it).

Thanks in advance.

  • postalcoder 31 minutes ago

    It's likely you're running into the "too deep into mediocre code with unclear interfaces and a lot of hidden assumptions" hell that LLMs are generally poor at handling. If you've hit a wall you can't get past, it's better to do a controlled demolition.

    I.e., take everything written by ChatGPT and have the highest-quality model you have summarize what the game does and break down all the features in depth.

    Then, take that document and feed it into Claude. It may take a few iterations, but the code you get will be much better than what you'd get by iterating on the existing code.

    Claude will likely zero-shot a better application or, at least, one that it can improve on itself.

    If Claude still insists on making up new features, then install the context7 MCP server and ask it to use context7 when working on your request.

    • djaychela 10 minutes ago

      Thanks.

      I think I should have made it clearer in my post: the code is Claude's and was done from scratch (the first app was a Mandelbrot viewer it added features to; this is a platform game).

      It's a single file at the moment (I did give a suggested project structure with files for each area of responsibility) and it kind-of-works.

      I think I could create the missing method in the class myself, but I wanted to see if it was possible by getting the tools to do it - it's as much an experiment in the process as in the end result.

      Thanks for replying, I shall investigate what you've suggested and see what happens.

erentz 9 hours ago

There must, at this point, be lots and lots of actual walkthroughs of people coding with Claude Code, or whatever, and producing real-world apps or libraries, right? It would be neat to have a list, because this is what I want to read (or watch), rather than people just continuously telling me all this is amazing without showing me it’s amazing.

  • yoyohello13 3 hours ago

    Something really feels off about the whole thing. I use Claude code. I like it, it definitely saves me time reading docs or looking on stack overflow. It’s a phenomenal tool.

    If we are to believe the hype, though, shouldn’t these tools be launching software into the stratosphere? The CEO of Stripe said AI tools provide a 100x increase in productivity. That was 3-4 months ago. Shouldn’t Stripe be launching rockets into space by now, since that’s technically 400 months of dev time? Microsoft is reportedly all-in on AI coding. Shouldn’t Teams be the best, most rock-solid software in existence now? There has been so much hype around these tools being a supercharger for more than a year, but the actual software landscape looks about the same to me as it did 3-4 years ago.

    • dmix 2 hours ago

      Whatever it produces still needs to be carefully reviewed and guided. Context switching is very hard for a human programmer, so you need to stay focused on the same specific task, which makes it harder not to drift off to social media or IRL while waiting for it. And you're going to be on the same branch for the same ticket; git doesn't let you do multiple at once (at least I'm not set up to). I'm not sure where the productivity scaling would come from, outside of rapid experimentation on bad ideas until you find the right one, and of course rapid autocomplete, faster debugging, and much fancier global find/replace-type stuff.

      I use it quite aggressively and I'd probably only estimate 1.5x on average.

      Not world changing because we all mostly work on boring stuff and have endless backlogs.

    • AdieuToLogic an hour ago

      Perhaps the hype is not intended for developer consumption, even though it's often worded as if it were, but is instead meant for investors (such as VCs).

    • t0lo 26 minutes ago

      Despite living in an age of supposedly transformative brainstorming and creative technologies, we’re also paradoxically inhabiting a time with less creativity and vision than ever. :)

    • uludag 2 hours ago

      Maybe this discrepancy comes down to something like Claude Code reducing the amount of brainpower exerted. If you have to do 80% less thinking to accomplish a task, but the task takes just as long, you may (even rightfully) feel five times more productive even though output didn't change.

      And is this a good thing since you can (in theory) multitask and work longer hours, or bad because you're acquiring cognitive debt (see "Your Brain on ChatGPT")?

    • deadbabe 2 hours ago

      Code is not the bottleneck.

      • AdieuToLogic an hour ago

        > Code is not the bottleneck.

        Understanding the problem to solve is.

  • ookblah 3 hours ago

    This sounds like a cop-out, but honestly you will probably see the most vocal on both sides of this here, while the vast majority are just quietly doing their stuff (ironic that I'm writing this).

    I feel like some kind of shill, but honestly I'm anywhere from 1.5x to 10x on certain tasks. The main benefit is that I can reduce a lot of cognitive load on tasks that are either 1) exploratory, 2) throwaway, or 3) boilerplate-ish/refactor-type stuff. Because of that I have a more consistent baseline.

    I still code "by hand". I still have to babysit and review almost all the lines; I don't just let it run for hours and try to review it at the end (nightmare). This is a production app that's been running for years. I don't post YouTube videos because I don't have the time to set it up and try to disprove the "naysayers" (nor does that even matter), and it's code I can't share.

    The caveat here is that we are a super lean team, so I probably have more context on the entire system and can identify problems early on and head them off. Also, I have a vested interest in increasing efficiency for myself, whereas if you're part of a corpo you're probably doing more work for the same comp.

    • magicalist 3 hours ago

      > this sounds like a cop out

      This may sound more mean than I intend, but your comment is exactly the kind of thing the GP post was describing as useless yet ubiquitous.

      • ookblah 3 hours ago

        I mean, I get it, but at some point you have to either assume everyone is lying or that we are all bots or something. It's probably the same feeling I get when I read about someone having trouble and my first thought is "they aren't using it right". I'm sure the reverse is something like "they aren't a real programmer" LOL.

  • TheRoque 7 hours ago

    100% agree. I have been looking for a YouTube video or stream of someone leveraging AI to get this productivity boost, but I haven't found anything that made me think "okay, that really speeds things up".

    • aniforprez 3 hours ago

      It was extremely telling to me that the Zed editor team put out a video about using their AI interface (I don't remember what model they used). They asked it to add a new toggle for a feature, then spent half the video demonstrating their admittedly excellent review workflow for accepting or rejecting the AI-generated code. You could directly see how useless it was, down to adding completely superfluous new lines randomly, and the explanation was "it just does that sometimes".

      I'm really not seeing these massive gains in my workflow either. Maybe it's the nature of my work, but it's baffling how every programming use case I see on YouTube is so surface-level. At this point I've given up and don't use it at all.

graphememes 7 hours ago

It's great for me. I have a claude.md at the root of every folder, generally outlined in piped text for minimal context addition, covering the rulesets for that folder. It always creates tests for what it's doing, and is set to do so in a very specific folder in a very specific way; otherwise it tries to create debug files instead. I also have set rules for re-use, so it doesn't proliferate "enhanced" class variants or structures and always tries to leverage what exists instead of bringing in new things unless absolutely necessary. The way I talk to it is very specific as well: I don't write huge prose, I don't set up huge PRDs, and often I will only do planning if it's something that I am myself unsure about. The only time I will do large text input is when I know the LLM won't have context (it's newer than its knowledge window).

I generally get great 1-shot results (one input and the final output after all tasks are done). I have moved past Claude Code, though: I am using the CLI itself with another model. My reason for switching isn't that Claude was a bad model, it's just that it was expensive and I have access to larger models for cheaper. The CLI is the real power, not the model itself per se. Opus does perform a little better than the others.

It's totally made it so I can do the code that I like to do while it works on other things during that time. I have about 60-70 different agent streams going at a time atm. Codebase sizes vary; the largest one right now is about 200M tokens (React, TypeScript, Golang) in total, and it does a good job. I've only had to tell it twice to do something differently.

  • jatora 2 hours ago

    Can you list some of your agent streams you have going? Very curious

  • leonidasv 4 hours ago

    Which models do you use instead of Anthropic ones?

    I've only tried Claude Code with an external model once (Kimi K2) but it performed poorly.

Imanari 8 hours ago

PSA: you can use CC with any model via https://github.com/musistudio/claude-code-router

The recent Kimi K2 supposedly works great.
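
As I understand it, the router works because Claude Code can be pointed at any Anthropic-compatible endpoint. The direct env-var route looks roughly like this (endpoint URL and variable names are from memory, so double-check against the provider's docs):

    # point the stock Claude Code client at an Anthropic-compatible API
    export ANTHROPIC_BASE_URL="https://api.moonshot.ai/anthropic"
    export ANTHROPIC_API_KEY="sk-..."   # your provider key
    claude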

  • nxobject 4 hours ago

    Corollary, if you're unfamiliar with how CC works (because you've never been able to justify its price, like me): the CC client itself is freely available via npm.

  • chrismustcode 7 hours ago

    I’d just use sst/opencode if using other models (I use it for Claude through Claude pro subscription too)

ctoth 11 hours ago

Sometimes, you'll just have a really productive session with Claude Code doing a specific thing that maybe you need to do a lot of.

One trick I have gotten some mileage out of was this: have Claude Code research slash commands, then make a slash command to turn the previous conversation into a slash command.

That was cool and great! But then, of course, you inevitably interrupt it and need to step in to correct it, or to make a change: "not like that!" or "use this tool" or "think harder before you try that" or "think about the big picture"... So you do that. And then you ask it to make a command, and it figures out you want a /improve-command command.

So now you have primitives to build on!

Here are my current iterations of these commands (not saying they are optimal!)

https://github.com/ctoth/slashcommands/blob/master/make-comm...

https://github.com/ctoth/slashcommands/blob/master/improve-c...
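
For anyone who hasn't dug in: a slash command is just a markdown file under .claude/commands/, and Claude Code substitutes $ARGUMENTS with whatever you type after the command. A trivial made-up example:

    <!-- .claude/commands/fix-lint.md, exposed as a /fix-lint style command -->
    Run the project's linter, then fix every issue it reports in $ARGUMENTS.
    Re-run the linter after each fix and stop once it passes cleanly.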

  • whatever1 10 hours ago

    I find it amazing, all the effort people put into trying to program a non-deterministic black box. True courage.

    • ctoth 10 hours ago

      Oh do let me tell you how much effort I put into tending my non-deterministic garden or relationships or hell even the contractors I am using to renovate my house!

      A few small markdown documents and putting in the time to understand something interesting hardly seems a steep price!

      • blub 10 hours ago

        The contractors working on my house sometimes paint a room bright pink for no particular reason.

        When I point that out, they profusely apologize and say that of course the walls must be white and wonder why they even got the idea of making them pink in the first place.

        Odd, but nice fellows otherwise. It feels like they’re 10x more productive than other contractors.

        • savanaly 37 minutes ago

          Also, they paint the whole room for $2.41! When you add that critical detail the analogy comes into focus.

          People aren't excited about AI coding because it's so much better than human coders. They're excited because it's within spitting distance while rounding down to free.

        • ctoth 9 hours ago

          I asked my contractor to install a door over the back stairs opening outward, came back, and it was installed opening inward. He told me he had tried to figure out a way to do it to code, but there wasn't one, so that's what he had to do. I was slightly miffed he didn't consult me first, but he did the pragmatic thing.

          This actually happened to me Monday.

          But sure, humans are deterministic clockwork mechanisms!

          Are you now going to tell me how I got a bad contractor? Because that sure would sound a lot like "you're using the model wrong"

          • roywiggins 2 hours ago

            The big difference is that when questioned he didn't profusely apologize and immediately get to work undoing it, but instead gave you a reason that was probably causally related to the decision he made and not a retroactive justification made up on the spot.

        • tick_tock_tick 9 hours ago

          I mean, I get that you're trying to make a joke, but a contractor fucking up paint and trying to gaslight you into believing it's the one you signed off on isn't that rare.

    • simlevesque 10 hours ago

      Our brains are non-deterministic black boxes. We just don't like to admit it.

    • johnfn 6 hours ago

      You might also find it amazing that people work with colleagues and give them feedback!

      • aniforprez 3 hours ago

        Colleagues take that feedback and improve. A large number of people here seem to find these kinds of improvements in other humans useless in the hypothetical case of them leaving, but I occasionally look at my mentee's LinkedIn and feel a sense of pride at having contributed. I also feel joy when they appreciate my efforts.

        I genuinely don't understand how often people compare AI to junior developers.

bluetidepro 10 hours ago

How are people using this without getting rate-limited nonstop? I pay for Claude Pro and sometimes can’t go more than 5 prompts in an hour without it saying I need to wait 4 hours for a cooldown. I feel like I’m using it wrong or something; it’s such a frustrating experience. How do you give it any real code context without burning through all your tokens so quickly?

  • SwiftyBug 9 hours ago

    I've been using it pretty heavily and never have I been rate limited. I'm not even on the Pro Max plan.

  • tomashubelbauer 10 hours ago

    I have the same issue and in recent days I seem to have gotten an extra helping of overload errors which hit extra hard when I realize how much this thing costs.

    Edit: I see a sibling comment mention the Max plan. I wanna be clear that I am not talking about rate limits here but actual models being inaccessible - so not a rate limit issue. I hope Anthropic figures this out fast, because it is souring me on Claude Code a bit.

  • mbrumlow 10 hours ago

    No clue. I use it for hours on end. Longest run cost me $30 in tokens. I think it was 4 hours of back and forth.

    Here is an example of chat gpt, followed by mostly Claude that finally solved a backlight issue with my laptop.

    https://github.com/mbrumlow/lumd

    • singron 9 hours ago

      I haven't used Claude Code a lot, but I was spending about $2-$5/hour, though it varied a lot. If I used it 6 hours/day in a normal 21-workday month (126 hours), I would rack up $250-$630/month in API costs. I think I could be more efficient with practice (maybe $1-$3/hour?). If you think you are seriously going to use it, then the $100/month or $200/month subscriptions could definitely be worth it, as long as you aren't getting rate-limited.

      If you aren't sure whether to pull the trigger on a subscription, I would put $5-$10 into an API console account and use CC with an API key.

  • manmal 9 hours ago

    Try giving it a repo map, e.g. by including one in CLAUDE.md. It should pull in fewer files (context) that way. Explicitly telling it which files you suspect need editing also helps. If you let it run scripts, make sure to tell it to grep out only the relevant output, or pipe the rest to /dev/null.
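
    A repo map can be as small as this (structure invented for illustration):

        ## Repo map
        cmd/        - CLI entry points
        internal/   - business logic; start here for most changes
        web/        - templates and static assets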

  • ndr_ 9 hours ago

    I had success through Amazon Bedrock on us-east1 during European office hours. Died 9 minutes before 10 a.m. New York time, though.

  • terhechte 10 hours ago

    You need the Max plan to break free of most rate limits.

    • bluetidepro 10 hours ago

      I wish there was a Max trial (while on Pro) to test whether this is the case, even if it was just a 24-hour trial. Max is an expensive trigger to pull while just hoping it solves this.

      • cmrdporcupine 8 hours ago

        FWIW I went Claude Max after Pro, and the trick is to turn off Opus. If you do that you can pretty much use Sonnet all working day in a normal session. I don't personally find Opus that useful, and it burns through quota at 5x the speed of Sonnet.

        • wahnfrieden 4 hours ago

          It is typical to buy 2-3 Max tier plans for sustained Opus use

  • stavros 8 hours ago

    Are you using Opus?

ipaddr 6 hours ago

What I wonder is: how is the interview process now? Are they testing you with AI or without? Is leetcode being asked with AI providing the answer?

Is there a bigger disconnect on how you are judged in an interview vs the job now?

How are the AI-only developers handling this?

  • ct0 6 hours ago

    The projects you work on and the impact that they had. Hopefully.

tortila 9 hours ago

After reading and hearing rave reviews, I’d love to try Claude Code in my startup. I already manage a Claude Team subscription, but AFAIK Code is not included; it only exists in Pro/Max, which are individual accounts. How do people use it as a subscription for a team (ideally with central billing)?

  • dukeyukey 8 hours ago

    You can use CC with AWS Bedrock, with all the centralised billing AWS offers. That's how my company handles it.

ChuckMcM 7 hours ago

Reading this I can see these tools as training tools for software engineering managers.

ToJans 9 hours ago

Whenever I'm rate limited (pro max plan), I stop developing.

For anything but the smallest things I use claude code...

And even then...

For the bigger things, I ask it to propose to me a solution (when adding new features).

It helps when you give proper guidance: do this, use that, avoid X, be concise, ask to refactor when needed.

All in all, it's like a slightly autistic junior dev, so you need to be really explicit, but once it knows what to do, it's incredible.

That being said, whenever it's stuck on an issue or keeps going in circles, I tend to roll back, ask for a proper analysis based on the requirements, and fill in the details where necessary.

For the non-standard things (e.g. detect windows in a photo and determine their measurements in centimetres), you still have to provide a lot of guidance. However, once I told it to use xyz and ABC, it just went. I've never written more than a few lines of PHP in my life, but I have a full API server with an A100 running, thanks to Claude.

The accumulated hours saved are huge for me, especially in front-end development, refactoring, and implementing new features to see if they make sense.

For me it's a big shift in my approach to work, and I'd be really sad if I had to go back to the pre-AI era.

Truth be told, I was a happy user of Cline and Gemini and spent hundreds of dollars on API calls per month. But it never gave me the feeling Claude Code gives me; the reliability of this thing is saving me 80% of my time.

  • dontlaugh 9 hours ago

    I still don’t get why I should want that.

    I’ve mentored and managed juniors. They’re usually a net negative in productivity until they are no longer juniors.

    • quesera 6 hours ago

      My current working theory is this:

      People who enjoy mentoring juniors are generally satisfied with the ROI of iterating through LLM code generation.

      People who find juniors sort-of-frustrating-but-part-of-the-job-sometimes have a higher denominator on that ROI calc, and ask themselves why they would keep banging their head against the LLM wall.

      The first group is probably wiser and more efficient at multiplying their energies, in the long term.

      I find myself in the second group. I run tests every couple months, but I'm still waiting for the models to have a higher R or a lower I. Any day now.

      • moomoo11 an hour ago

        I'm a cynical person, and IME the former are some of the most annoying and usually the worst engineers I've met.

        Most people who "mentor" other people (the ones who make it a point of pride and a part of their identity) are usually the last people you want to take advice from.

        Actual mentors are the latter group, who juniors seek out or look up to.

        In other words, the former group is akin to those people on YouTube who try to sell shitty courses.

wrs 10 hours ago

There is a VS Code extension for Claude Code. It's hardly more than a terminal window, really, but that in itself is pretty handy. If you do /ide to connect the extension, it does a few things, but nothing yet resembling the Cursor diff experience (much less the Cursor tab experience, which is the reason I still use Cursor).

  • mike1o1 9 hours ago

    Claude Code has pretty much replaced Copilot overnight for me, though I wish the VS Code plugin were a bit more integrated; it's only a little more than a terminal, but I guess that's the point. I was hoping for syntax highlighting that matches my editor, and things like that (beyond just the light/dark theme).

    What I'd really want is a way to easily hide it, which I did quite frequently with Copilot in its own pane.

  • dejavucoder 10 hours ago

    I use Claude Code 50% of the time alongside Cursor now, because of Cursor's diff and tab features. The extension is just a bit buggy sometimes; otherwise I would use it much more. I hit some Node-related bugs today while searching stuff with it (forgot to report them to Anthropic, lol). Other bugs include scroll stuttering.

to-too-two 5 hours ago

Anyone using it for game dev? Like just having the agent try to build a game?

  • singron 5 hours ago

    I tried using aider with Godot. CC would probably be better. Aider with 4o/o3-mini wasn't very good at GDScript, and it was terrible at editing .tres/.tscn files (which are usually modified through the editor). If you had a very code-centric game, it could turn out OK, but if you have resources/assets that you normally edit with special programs, it is going to struggle.
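    If anyone retries this, scoping aider to just the script files might help keep it away from the scene/resource files the editor owns (file names here are made up):

        # only add GDScript sources, never .tres/.tscn scene files
        aider --model o3-mini scripts/player.gd scripts/enemy.gd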

    • wahnfrieden 4 hours ago

      You should be using the best models. Try o3-pro

deeshee 10 hours ago

It's great to see even the most hardcore, change-averse developers being happy with the latest AI-assisted development releases.

My workflow now boils down to 2 tools, really: leap.new to go from 0 to 1, because it also generates the backend code with infra + deployment, and then I pick it up in Zed/Claude Code and continue working on it.

  • 2sk21 8 hours ago

    I'm curious: Do you scrutinize every line of code that's generated?

    • cmrdporcupine 8 hours ago

      At first I did not. Now I have learned that I have to.

      You have to watch Claude Code like a hawk, because it's inconsistent. It will cheat, give up, and change directions without making it clear that's what it's doing.

      So, while it's not "junior" in capabilities, it is definitely "junior" in terms of your need as a "senior" to thoroughly review everything it does.

      Or you'll regret it later.

  • ardit33 10 hours ago

    1. So far, it is great if you know what you want and tell it exactly how you want it; AI can help you with that (basically intern-level work).

    2. It also helps when you are in a new area, but you don't want to dive deep and just want something quick that is not core to the app/service.

    But if you are experienced, you can see how AI can mess things up pretty quickly, so for me it has been best used to fill in clear and well-defined functionality piecemeal. Basically, it is best for small bites rather than large chunks.

    • deeshee 9 hours ago

      I agree. But it's also a mindset game. Experienced devs often approach AI with preconceptions that limit its utility - pride in "craftsmanship," control issues, and perfectionism can prevent seeing where AI truly shines. I've found letting go of those instincts and treating AI as a thought partner rather than just a code generator to be super useful. The psychological aspects of how we interact with these tools might be as important as the technical ones.

      A bunch of comments online also reflect how many "butthurt" developers shut things down with a closed mind, focusing only on the negatives and not letting the positives through.

      I sound a bit philosophical but I hope I'm getting my point across.

      • financltravsty 9 hours ago

        What's your track record? What is your current scope of work for Claude Code?

        This conversation is useless without knowing the author's skillset and use-case.

      • Quarrelsome 9 hours ago

        > pride in "craftsmanship," control issues, and perfectionism

        I mean, do we really want our code base not to follow a coding standard? Or network code that doesn't consider failure or transactional issues? I feel like all of these traits are hallmarks of good senior engineers. Really good ones learn to let go a little, but no senior is going to watch a dev, automated or otherwise, circumvent six layers of architecture by blasting in a static accessor or something.

        Craftsmanship, control issues, and perfectionism tend to exist for readability, and to limit entropy and scope, so one can be more certain of the consequences of a chunk of code. So considering them a problem is a weird take to me.

komali2 3 hours ago

One thing I'm slightly anxious about in this new LLM world is whether the prices I'm paying are sustainable. I crank the fuck out of Cursor, and I think we're paying about 40 bucks a month for the privilege. Is this early Uber, where it was unbelievable how cheap the rides were? In 2030, am I going to have become dependent on AI-assisted levels of productivity, making client and customer promises based on that expectation, only to suddenly find myself looking at $1k+ bills now that all the AI companies need to actually make money?

voicedYoda 11 hours ago

Be lovely if I could sign up for Claude using my Google Voice number.

  • fuzzy2 9 hours ago

    Or no number even. Sucks to be missing out, but I won’t budge on this.

jwpapi 6 hours ago

Wait till he learns about aider

  • jamil7 an hour ago

    I use both for different tasks; Aider is a sharp knife and Claude Code more of a blunt instrument.

kypro 8 hours ago

HN has flipped so quickly from saying AI produces unreliable slop to most people using it to replace junior devs at their org – something I was heavily criticised for saying orgs should be doing a few months back.

Progress doesn't end here either. IMO, CC is more of a mid-level engineer with a top-tier senior engineer's knowledge. I think we're getting to the point where we can begin to replace the majority of engineers (even seniors), leaving just a handful of senior engineers to prompt and review AI-produced code and PRs.

Not quite there yet, of course, but definitely feeling that shift starting now... There are going to be huge productivity boosts for tech companies towards the end of this year if we get there.

Exciting times.

  • dude250711 7 hours ago

    How come CC is a crappy terminal instead of some super-nice environment built by Anthropic via CC itself?

    It should be capable of rebuilding VS Code but better, no?

  • hooverd 8 hours ago

    Where do the juniors come from?

38 8 hours ago

Claude is absolute trash. I am on the paid plan and repeatedly hit the limits, and their support is essentially non-existent, even for paid accounts.

  • wahnfrieden 4 hours ago

    The only plan worth considering is the $200 Max tier. And it is typical to pay for 2 or 3 of them.