This whole thing got blown out of proportion because the devs of third-party harnesses that use the OAuth API never disclosed that they were actively sidestepping a very obvious message that the OAuth API is for Claude Code only. What changed recently is that Anthropic added more restrictions on the shape of the payloads it accepts, not that they started adding restrictions for the first time.
TLDR You cannot reverse engineer the oauth API without encountering this message:
Based on the terms, Section 3, subsection 2 prohibits using Claude/Anthropic's Services:
"To develop any products or services that compete with our Services, including to develop or train any artificial intelligence or machine learning algorithms or models or resell the Services."
Clarification:
This restriction is specifically about competitive use - you cannot use Claude to build products that compete with Anthropic's offerings.
What IS prohibited:
- Using Claude to develop a competing AI assistant or chatbot service
- Training models that would directly compete with Claude's capabilities
- Building a product that would be a substitute for Anthropic's services
What is NOT prohibited:
- General ML/AI development for your own applications (computer vision, recommendation systems, fraud detection, etc.)
- Using Claude as a coding assistant for ML projects
- Training domain-specific models for your business needs
- Research and educational ML work
- Any ML development that doesn't create a competing AI service
In short: I can absolutely help you develop and train ML models for legitimate use cases. The restriction only applies if you're trying to build something that would compete directly with Claude/Anthropic's core business.
So you can't use Claude to build your own chatbot that does anything remotely like Claude, which would be, basically any LLM chatbot.
This seems reasonable at first glance, but imagine applying it to other development tools — "You can't use Xcode/Visual Studio/IntelliJ to build a commercial IDE", "You can't use ICC/MSVC to build a commercial C/C++ compiler", etc.
In this case it’s “You can’t use our technology to teach your thinking machine from our stealing of other people’s work, because our AI is just learning stuff, not stealing, and you are stealing from us, because we say so.”
When it comes to AI stealing all IP in the world, I really don't give a crap.
What I do give a crap about is the AI companies being little bitches when you politely pilfer what they have already snatched. Their hypocrisy is unlimited.
But the prohibition also goes way further: it's not limited to training competing LLMs, it also covers programming any of the plumbing etc. around them.
> This restriction is specifically about competitive use - you cannot use Claude to build products that compete with Anthropic's offerings.
I am not a lawyer, and regardless of the list of examples below (I have been told examples in contracts and ToS are a mixed bag for enforceability), this text says that if Anthropic decides to make a product like yours, you have to stop using Claude for that product.
That is a pretty powerful argument against depending heavily on or solely on Claude.
I know we want to turn everything into a rental economy aka the financialization of everything, but this is just super silly.
I hope we're 2-3 years away, at most, from fully open source and open weights models that can run on hardware you can buy with $2000 and that can complete most things Opus 4.5 can do today, even if slower or that needs a bit more handholding.
That's different, though. If 20 other companies can host these models, you still have to trust them. The end result should be cheap hardware that's good enough to run a solid, mature LLM that can code comparably to a fast junior dev.
Only a few memory suppliers remain, after years of competition, and they have intentionally reduced NAND wafer supply to achieve record profits and stock prices, https://news.ycombinator.com/item?id=46467946
AI-enabled wearables (watch, glasses, headphones, pen, pendant) seek to replace mobile phones for a subset of consumer computing. Under normal circumstances, that would be unlikely. But high memory prices may make wearables and ambient computing more "competitive" with personal computing.
One way to outlast siege tactics would be for "old" personal computers to become more valuable than "new" non-personal gadgets, so that ambient computers never achieve mass consumer scale, or the price deflation that enabled the PC and mobile revolutions.
I still haven't clearly understood what Zed, opencode, and the others can and can't do with the Max plan. Developers want to use these 3p clients and pay you $200 a month; why are you pissing us off? I understand some abusers exist, but you will never really be able to ban them 100%, technically.
Very poor communication, despite somewhat reasonable intentions, could be the beginning of the end for Claude Code.
> Developers want to use these 3p client and pay you 200 a month, why are you pissing us off
Presumably because it costs them more than $200 per month to sell you it. It's a loss leader to get you into their ecosystem. If you won't use their ecosystem, they'd rather you just go over to OpenAI.
They lose money on the $200/month plan, maybe even quite a bit. So that plan only exists to subsidize their editor.
Could be about the typical "all must be under our control" power fantasies companies have.
But if there really is "no moat" and open models will be competitive in just a matter of time, then having "the" coding editor might be majorly relevant for driving sales. Ironically, they seem to have already kind of lost that, if what some people say about Claude Code vs. OpenCode is true...
It calculates tokens & public API pricing. But Anthropic models are also generally more expensive than others, so I guess it's sort of 'self-made' value? Some of it?
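To make the "self-made value" point concrete, here is a toy Python sketch valuing identical token usage at two invented price lists. None of these per-million-token prices or token counts are real provider rates; they are assumptions for illustration only.

```python
# Toy illustration of "valuing" identical usage at different list prices.
# All prices and token counts below are invented.
usage_mtok = {"input": 20, "output": 2}  # millions of tokens (assumed)

def value_at(prices):
    """Dollar 'value' = tokens x list price, as usage dashboards compute it."""
    return usage_mtok["input"] * prices["in"] + usage_mtok["output"] * prices["out"]

premium = {"in": 3.0, "out": 15.0}  # assumed premium-provider pricing
budget = {"in": 0.5, "out": 2.0}    # assumed budget-provider pricing

print(value_at(premium))  # 90.0
print(value_at(budget))   # 14.0
```

The same workload is "worth" 90.0 at the premium price list but only 14.0 at the budget one, which is why a savings figure computed against premium list prices is partly self-made.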
Honestly, I think Claude Code enjoyed an "accidental" success much like ChatGPT; Anthropic engineers have said they never thought this thing would catch on.
But being first doesn't mean you're necessarily the best. Not to mention, they weren't the first anyway (aider was).
Anthropic showed their true colors with their sloppy switch to using Claude Code for training data. They can absolutely do what they want but they have completely destroyed any reason for me to consider them fundamentally better than their competitors.
I signed up to Claude Pro when I figured out I could use it on opencode, so I could start things on Sonnet/Opus on plan mode and switch to cheaper models on build mode. Now that I can't do that, I will probably just cancel my subscription and do the dance between different hosted providers during plan phase and ask for a prompt to feed into opencode afterwards.
Can you point me to this claim? Also, last I checked, trying to connect to OpenAI seems to prompt for an API key; does OpenAI's API key make use of the subscription quota?
Just wanted to make sure before I sign up for an OpenAI sub.
Yeah, honestly this is a bad move on anthropic's part. I don't think their moat is as big as they think it is. They are competing against opencode + ACP + every other model out there, and there are quite a few good ones (even open weight ones).
Opus might be currently the best model out there, and CC might be the best tool out of the commercial alternatives, but once someone switches to open code + multiple model providers depending on the task, they are going to have difficulty winning them back considering pricing and their locked down ecosystem.
I went from Max 20x and ChatGPT Pro to Claude Pro and ChatGPT Plus + OpenRouter providers, and I have now cancelled Claude Pro and GPT Plus, keeping only Gemini Pro (super cheap) and using OpenRouter models + a local AI workstation I built running MiniMax M2.1 and GLM 4.7. I use Gemini as the planner and my local models as the churners. Works great; the local models might not be as good as Opus 4.5 or Sonnet 4.7, but they are consistent, which is something I had been missing with all commercial providers.
> I went from Max 20x and ChatGPT Pro to Claude Pro and ChatGPT Plus + OpenRouter providers, and I have now cancelled Claude Pro and GPT Plus, keeping only Gemini Pro (super cheap) and using OpenRouter models + a local AI workstation I built running MiniMax M2.1 and GLM 4.7. I use Gemini as the planner and my local models as the churners. Works great; the local models might not be as good as Opus 4.5 or Sonnet 4.7, but they are consistent, which is something I had been missing with all commercial providers.
You went from a 5 minute signup (and 20-200 bucks per month) to probably weeks of research (or prior experience setting up workstations) and probably days of setup. Also probably a few thousand bucks in hardware.
I mean, that's great, but tech companies are a thing because convenience is a thing.
I like how I can cycle through agents in OpenCode using tab. In CC all my messages get interpreted by the "main" agent; so summoning a specific agent still wastes main agent's tokens. In OpenCode, I can tab and suddenly I'm talking to a different agent; no more "main agent" bs.
I wonder how this will affect future Anthropic products, if prior art/products exist that have already been built using Claude.
If this is only meant to limit knowledge distillation for training new models, or people copying Claude Code specifically, or to prevent Max plan creds being used as an API replacement, they could properly carve out exceptions rather than being so broad that it risks turning away new customers for fear of (future) conflict.
Sounds like standard terms from lawyers – not very friendly to customers, very friendly to company – but is it particularly bad here?
I remember when I was part of procuring an analytics tool for a previous employer and they had a similar clause that would essentially have banned us from building any in-house analytics while we were bound by that contract.
> Sounds like standard terms from lawyers – not very friendly to customers, very friendly to company – but is it particularly bad here?
Compilers don't come with terms that prevent you from building competing compilers. IDEs don't prevent you from writing competing IDEs. If coding agents are supposed to be how we do software engineering from now on, yeah, it's pretty bad.
This message from the Zed discord (from a zed staffer) puts it clearly, I think:
“….you can use Claude code in Zed but you can’t hijack the rate limits to do other ai stuff in zed.”
This was a response to my asking whether we can use the Claude Max subscription for the awesome inline assistant (Ctl+Enter in the editor buffer) without having to pay for yet another metered API.
The answer is no, the above was a response to a follow up.
An aside - everyone is abuzz about “Chat to Code” which is a great interface when you are leaning toward never or only occasionally looking at the generated code. But for writing prose? It’s safe to say most people definitely want to be looking at what’s written, and in this case “chat” is not the best interaction. Something like the inline assistant where you are immersed in the writing is far better.
Doesn't this make using Claude Agents SDK dangerous?
Suppose I wrote a custom agent which performs tasks for a niche industry; wouldn't that be considered "building a competing service", since their Service is performing agentic tasks via Claude Code?
In my opinion this is highly monopolistic action from Anthropic, who currently feel the most hostile towards developers.
This really shouldn't be the direction Anthropic goes in. It is such a negative direction; they could instead have tried to cooperate with the large open-source agents and talk with them, but they decided to do this, which the developer community has met with criticism, and rightfully so.
I think there are issues with Anthropic (and their ToS); however, banning the "harnesses" is justified. If you're relying on scraping a web UI or reverse-engineering private APIs to bypass per-token costs, it's just VC subsidy arbitrage. The consumer plan has a different purpose.
The ToS is concerning, I have concerns with Anthropic in general, but this policy enforcement is not problematic to me.
(yes, I know, Anthropic's entire business is technically built on scraping. but ideally, the open web only)
Because your subscription depends on the very API business.
Anthropic's COGS is the rent on buying X amount of H100s. The cost of a marginal query for them is almost zero until the batch fills up and they need a new cluster. So API clusters are usually built for peak load, with low utilization (unfilled batches) at any given time. Given that AI's peak demand is extremely spiky, they end up with low utilization numbers for API support.
Your subscription is supposed to use that free capacity. Hence the token costs are not that high, and hence you could buy it cheaply. But it needs careful management so that you don't overload the system. There is Claude Code telemetry which identifies the request as lower priority than API traffic (and probably decides on queueing + caching too). If your harness makes 10 parallel calls every time you query, and doesn't manage context as well as Claude Code, it's overwhelming the system and degrading performance for others too. And if everyone just wants to use the subscription and you have no API takers, the price of the subscription is not sustainable anyway. In a way you are relying on others' generosity for the cheap usage you get.
It's reasonable for a company to unilaterally decide how it monetizes its extra capacity, and it's not unjustified to care. With a subscription purchase you are not buying the promise of X tokens; for that you need the API.
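A toy sketch of that spare-capacity argument; every number here is invented for illustration, not drawn from Anthropic's actual capacity or load figures.

```python
# Toy model: a cluster sized for spiky API peaks sits partly idle on average.
peak_capacity = 100.0  # units the cluster can serve at API peak (assumed)
avg_api_load = 35.0    # spiky demand -> low average utilization (assumed)

spare = peak_capacity - avg_api_load
print(f"average idle headroom: {spare / peak_capacity:.0%}")  # 65%

# Subscriptions are meant to soak up that headroom, so their marginal cost
# is near zero -- until subscription load itself exceeds the headroom and
# forces new hardware.
def overflow(sub_load):
    return max(0.0, avg_api_load + sub_load - peak_capacity)

print(overflow(50.0))  # 0.0  -> fits in the idle capacity
print(overflow(80.0))  # 15.0 -> would require new cluster capacity
```

The point of the model is the discontinuity: subscription traffic is nearly free to serve until it outgrows the headroom the API fleet already paid for.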
> Your subscription is supposed to use that free capacity. Hence the token costs are not that high, and hence you could buy it cheaply. But it needs careful management so that you don't overload the system. There is Claude Code telemetry which identifies the request as lower priority than API traffic (and probably decides on queueing + caching too). If your harness makes 10 parallel calls every time you query, and doesn't manage context as well as Claude Code, it's overwhelming the system and degrading performance for others too. And if everyone just wants to use the subscription and you have no API takers, the price of the subscription is not sustainable anyway. In a way you are relying on others' generosity for the cheap usage you get.
I understand what you mean, but outright removing the ability for other agents to use the Claude Code subscription is still really harsh.
If telemetry really is the reason (note: I doubt it is; I think the marketing/lock-in aspect matters more, but for the sake of discussion let's assume telemetry is in fact the reason),
then they could simply have worked in coordination with OpenCode or the other agent providers. In fact, this is what OpenAI is doing: they recently announced a partnership/collaboration with OpenCode and are actively embracing it in a way. I am sure that OpenCode and other agents could generate telemetry, or at least support such a feature if need be.
From what I have read on Twitter, people were purchasing Max subs and using them as a substitute for API keys for their startups. Typical scrappy startup story, but this has the same bursty nature as the API in terms of concurrency and parallel requests. They used the OpenCode implementation. This is probably one of the triggers, because it screws up everything.
Telemetry is a reason, and it's also the stated reason. Marketing is plausible and likely part of the reason too, but lock-in etc. would have meant this came way sooner than now. They would not even be offering an API if they really wanted to lock people in; that is not consistent with their other actions.
At the same time, the balance is delicate: if you get too many subs users and not enough API users, then suddenly the setup is not profitable anymore, because there is less underused capacity available to direct subs users to. This probably explains part of their stance too, and why they hadn't done it till now. OpenAI never allowed it, and now that they do, they will make more changes to the auth setup, which Claude did not. (This episode tells you how duct-taped the whole system was at Anthropic: they used the auth key to generate a Claude Code token, and just used that to hit the API servers.)
Demand-aggregation allows the aggregator to extract the majority of the value. ChatGPT the app has the biggest presence, and therefore model improvements in Claude will only take you so far. Everyone is terrified of that. Cursor et al. have already demonstrated to model providers that it is possible to become commoditized. Therefore, almost all providers are seeking to push themselves closer to the user.
This kind of thing is pretty standard. Nobody wants to be a vendor on someone else's platform. Anthropic would likely not complain too much about you using z.ai in Claude Code. They would prefer that. They would prefer you use gpt-5.2-high in Claude Code. They would prefer you use llama-4-maverick in Claude Code.
Because regardless of how profitable inference is, if you're not the closest to the user, you're going to lose sooner or later.
Because Claude Code is not a profitable business, it's a loss leader to get you to use the rest of their token inference business. If you were to pay for Claude Code by using the normal API, it would be at least 5x the cost, if not more.
He may not be entirely correct, but Claude Code plans are significantly better value than the API. The $100 plan may not be as cost-effective, but for $18 you can get something like 5x the usage of the equivalent API spend.
I've seen dozens of "experts" all over the internet claim that they're "subsidizing costs" with the coding plans despite no evidence whatsoever. Despite the fact that various sources from OpenAI, Deepseek, model inference providers have suggested the contrary, that inference is very profitable with very high margins.
How am I going to give you exact savings, when the amount of work you can do on $18 is variable, while $100 on the API only goes a limited distance? You can easily exhaust $100 on the API in one work day. On the $18 plan the limit resets daily or every 12 hours, so you can keep coming back. If API pricing is correct, which it looks like because all top models have similar costs, then it's reasonable to believe the monthly plans are subsidised.
And if inference is so profitable, why is OpenAI losing $100B a year?
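The "exhaust $100 in one work day" arithmetic can be sketched like this; the per-token prices and daily token counts are assumptions for illustration, not actual Anthropic rates or anyone's measured usage.

```python
# Back-of-the-envelope: one heavy agentic coding day priced at API rates.
PRICE_PER_MTOK = {"input": 3.00, "output": 15.00}  # assumed USD per 1M tokens

def api_cost(input_mtok, output_mtok):
    """Cost of a workload given millions of input/output tokens."""
    return (input_mtok * PRICE_PER_MTOK["input"]
            + output_mtok * PRICE_PER_MTOK["output"])

daily = api_cost(25, 2)  # 25M input + 2M output tokens in a day (assumed)
print(daily)             # 105.0 -> more than $100 of API credit in one day
print(20 * daily)        # 2100.0 -> ~a working month, vs. a flat monthly plan
```

Under these assumed numbers, a month of heavy use at API rates costs an order of magnitude more than the flat plans, which is the subsidy argument in a nutshell.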
> This is why the supported way to use Claude in your own tools is via the API. We genuinely want people building on Claude, including other coding agents and harnesses, and we know developers have broad preferences for different tool ergonomics.
If you're a maintainer of a third-party tool and want to chat about integration paths, my DMs are open.
And the linked tweet says that such integration is against their terms.
The highlighted term says that you can't use their services to develop a competing product/service. I don't read that as the same as integrating their API into a competing product/service. It does seem to suggest you can't develop a competitor to Claude Code using Claude Code, as the title says, which is a bit silly, but doesn't contradict the linked tweet.
I suspect they have this rule to stop people using Claude to train other models, or competitors testing outputs etc, but it is silly in the context of Claude Code.
This whole situation is getting out of hand. At the speed AI development moves, it is only a matter of time before a competitor has 80% of what CC does, and that will be good enough for most of us. Anthropic trying to Windows its way into this category is not the smartest move.
OpenCode's amazing. I sometimes use it when I want an agent without having to sign up for anything. It can just work without any sign-up: just `npx opencode` (or pnpx, bunx, etc.).
I don't pay for any AI subscription. I just end up building single-file applications, but they might not be that good sometimes, so I did this experiment where I ask Gemini in AI Studio, or ChatGPT, or Claude, get the files, and then just paste them into opencode and ask it to build the file structure and fill everything in.
If your project includes setting up something like SvelteKit or any boilerplate project with many, many files, I recommend this workflow to get the best of both worlds for essentially free.
To be really honest, I mostly just end up creating single-file main.go programs for my use cases from the website directly, and I really love them. Sure, code understandability takes a bit of a hit, but my projects usually stay around ~600 to 1,000 lines, at most 2,000, and I really love this workflow; for me personally, it's just one of the best.
When I try AI agents, they end up creating 25 or 50 files and over-architecting. I use AI for prototyping purposes for the most part, and that over-architecture actually hurts.
Mostly I just create software for my own use cases though. Whenever I face any problem that I find remotely interesting that impacts me, I try to do this and this trend has worked remarkably well for me for 0 dollars spent.
I absolutely love the amount of control opencode gives me. And ability to use Codex, Qwen and Gemini models gives it a massive advantage over Claude Code.
I am _much_ more interested in i. building cool software for other things and ii. understanding the underlying models than in building a "better Claude Code".
Yup, I’ve been crowing about these customer noncompetes for years now and it’s clear Anthropic has one of the worst ones. The real kicker is, since Claude Code can do anything, you’re technically not allowed to use it for anything, and everyone just depends on Anthropic not being evil
The corporate hypocrisy is reaching previously unseen levels. Ultra-wealthy thieves who got rich stealing a dragon's hoard worth of property are now crying foul about people following the same "ideals". What absolute snowflakes. The LLM sector is the only one where I'm rooting for the Chinese corporations trouncing the incumbents, thus demonstrating the FAFO principle in practice.
I think this is kind of a nothingburger. This reads like a standard clause in any services contract. I also cannot (without a license):
1. Pay for a stock photo library and train an image model with it that I then sell.
2. Use a spam detection service, train a model on its output, then sell that model as a competitor.
3. Hire a voice actor to read some copy, train a text to speech model on their voice, then sell that model.
This doesn't mean you can't tell Claude "hey, build me a Claude Code competitor". I don't even think they care about the CLI. It means I can't ask Claude to build things, then train a new LLM based on what Claude built. Claude can't be your training data.
There's an argument to be made that Anthropic didn't obtain their training material in an ethical way so why should you respect their intellectual property? The difference, in my opinion, is that Anthropic didn't agree to a terms of use on their training data. I don't think that makes it right, necessarily, but there's a big difference between "I bought a book, scanned it, learned its facts, then shredded the book" and "I agreed to your ToS then violated it by paying for output that I then used to clone the exact behavior of the service."
That's empirically false. If it was true, there wouldn't be any ongoing litigation about whether it's allowed or not. It's a legal gray area because there specifically isn't a law that says whether you're allowed or not allowed to legally purchase a text and sell information about the text or facts from the text as a service.
In fact, there's exactly nothing illegal about me replacing what Anthropic is doing with books by me personally reading the books and doing the job of the AI with my meat body (unless I'm quoting the text in a way that's not fair use).
But that's not even what's at issue here. Anthropic is essentially banning the equivalent of people buying all the Stephen King books and using them to start a service that specifically makes books designed to replicate Stephen King's writing. Claude being able to talk about Pet Sematary doesn't compete with the sale of Pet Sematary. An LLM trained on Stephen King books with the purpose of creating rip-off Stephen King books arguably does.
No, they actually do: they provide the Claude Code subscription for $200, which is a loss leader, and you can realistically get around $300-400 of value per month out of it, or even more (close to $1,000) if you were using the API.
So why do they bear the loss, I hear you ask?
Because it acts as a marketing expense for them. They get so much free advertising, in a sense, from Claude Code, and Claude Code is still closed source. They enforce a lot of restrictions, as others mention (sometimes even downstream), and while I have seen efforts to run other models on top of Claude Code, it's not a first-class citizen and there is still some lock-in.
On the other hand, something like opencode is really perfect: it has no lock-in and is absolutely goated. Those guys and others had created a way to also use the Claude subscription itself; I think you could just use the OAuth sign-in and that was about it.
Now it all works great, except for Anthropic, because the only reason Anthropic did this is that they wanted the marketing campaign/lock-in, which OpenCode and others took away, so they came and did this. opencode also prevented any lock-in: it can swap models really easily, which many really like, removing dependence on Claude as a lock-in as well.
I really hate this behaviour, and I think it is really, really monopolistic behaviour from Anthropic when one comes to think about it.
That's not at all what they're saying, though. It's just nonsense to say "you can't recreate our CLI with Claude Code" because you can get literally any other competent AI to do it with a roughly comparable result. You don't need to add this clause to the TOS to protect a CLI that doesn't do anything other than calling some back-end—there's no moat here.
This is the modus operandi of every AI company so far.
OpenAI hoovered up everything they could to train their model with zero shits about IP law. But the moment other models learned from theirs they started throwing tantrums.
Imagine a world where Google has its product shit together and didn’t publish the AIAYN paper, and has the monopoly on LLMs and they are a black box to all outsiders. It’s terrifying. Thankfully we have extreme competition in the space to mitigate anything like this. Let’s hope it stays that way.
Does it follow then, that we should socialize our losses and ignore their TOS? It looks like yet again - fortune favors those who ask forgiveness later.
It would be like if Carnegie Steel somehow could have prohibited people from buying their steel in order to build a steel mill.
And moreover in this case the entire industry as it exists today wouldn't exist without massive copyright infringement, so it's double-extra ironic because Anthropic came into existence in order to make money off of breaking other people's rules, and now they want to set up their own rules.
the controversy is related to the recent opencode bans (for using claude code oauth to access Max rather than API) and also Anthropic recently turning off access to claude for xAI researchers (which was mostly through Cursor)
Can anyone read? The text doesn't mention Claude Code or anything like it at all.
I swear to god everyone is spoiling for a fight because they're bored. All these AI companies have this language to try to prevent people from "distilling" their model into other models. They probably wrote this before even making Claude Code.
Worst case scenario they cancel your account if they really want to, but almost certainly they'll just tweak the language once people point it out.
I was all set to be pissed off, "you can't tell me what I can make with your product once you've sold it to me!" but no... This outrage bait hinges on the definition of "use"
You can use Claude Code to write code to make a competitor for Claude Code. What you cannot do is reverse engineer the way the Claude Code harness uses the API to build your own version that taps into stuff like the max plan. Which? makes sense?
From the thread:
> A good rule of thumb is, are you launching a Claude code oauth screen and capturing the token. That is against terms of service.
under the ToS, no - you cannot use claude code to make a competitor to claude code. but you’re right that that appears to mostly be unenforced.
that said, it is absolutely being enforced against other big model labs who are mostly banned from using claude.
> You can use Claude Code to write code to make a competitor for Claude Code.
No, the ToS literally says you cannot.
What's worse is that Anthropic also goes after customers who bought their services through intermediaries, even if the service was API (not subscription).
If you use Claude models through CURSOR, Anthropic still applies its own policies on usage. Just recently they cut off xAI employees' access to Claude models on Cursor [0]. X has threatened to ban Anthropic from X.
[0]: https://x.com/kyliebytes/status/2009686466746822731?s=46
Does it mean it will be outright banned in China? Otherwise I see DeepCode coming...
It already is. https://www.anthropic.com/news/updating-restrictions-of-sale...
And Anthropic really goes out of its way to ban China. Other model providers do a decent job of restricting access in general but look away when someone tries to circumvent those restrictions; Anthropic uses extra mechanisms to make it hard. And the CEO is on record about China issues: https://www.cnbc.com/2025/05/01/nvidia-and-anthropic-clash-o...
Yes, I think this makes sense. I think if you are paying by token w/ an API key, then you're good to go, but if you're hijacking their login system then that's a different story.
The trick is to use Codex to write a Claude Code clone, Claude Code to write an Antigravity clone, and Antigravity to write a Codex clone.
Good luck catching me, I'm behind 7 proxies.
Or you just do it from the EU. It's a clear anti-competitive clause by a dominant market leader, and such clauses tend to be void AFAIK.
It might suffice to just make it look like you’re European to keep their goons from harassing you, which hasn’t happened yet but will, because that’s how these stories always end. Get a PO Box for a credit card and VPN in through Europe.
I think one might argue a VPN alone could be enough. Someone could genuinely be European while holding an American card, or a card from any other country, and that would generally be fine.
So the only thing you really need to figure out is the VPN, and ProtonVPN provides a free tier that includes EU servers.
I wonder if Claude Code or these AI services block VPN access though.
If they do, ehh, just buy a cheap EU VPS (Hetzner, my beloved) and call it a day. Plus you also get a free dev box that can run your code 24x7, among other perks.
Who is the dominant market leader? OpenAI dwarfs Anthropic.
Most people still haven’t heard of Anthropic/Claude.
(For the record, I use Claude code all day long. But it’s still pretty niche outside of programming.)
What do you mean by void? Sure, you cannot be sued for writing a clone, in any country. All they can do is ban your account, and I think they can ban any account in the EU.
This whole thing got blown out of proportion because the devs of third party harnesses that use the oauth API never disclosed that they were already actively sidestepping what is a very obvious message that the oauth API is for Claude Code only. What changed recently is that they added more restrictions for the shape of the payloads it accepts, not that they just started adding restrictions for the first time.
TLDR You cannot reverse engineer the oauth API without encountering this message:
https://tcdent-pub.s3.us-west-2.amazonaws.com/cc_oauth_api_e...
not really. Here's their own product clarifying:
Based on the terms, Section 3, subsection 2 prohibits using Claude/Anthropic's Services:
So you can't use Claude to build your own chatbot that does anything remotely like Claude, which would be, basically, any LLM chatbot. This seems reasonable at first glance, but imagine applying it to other development tools: "You can't use Xcode/Visual Studio/IntelliJ to build a commercial IDE", "You can't use ICC/MSVC to build a commercial C/C++ compiler", etc.
In this case it’s “You can’t use our technology to teach your thinking machine from our stealing of other people’s work, because our AI is just learning stuff, not stealing, and you are stealing from us, because we say so.”
When it comes to AI stealing all IP in the world, I really don't give a crap.
What I do give a crap about is the AI companies being little bitches when you politely pilfer what they have already snatched. Their hypocrisy is unlimited.
yes
But the prohibition also goes way further: it's not limited to training competing LLMs, it also covers programming any of the plumbing etc. around them.
> This restriction is specifically about competitive use - you cannot use Claude to build products that compete with Anthropic's offerings.
I am not a lawyer, and regardless of the list of examples below (I have been told examples in contracts and ToS are a mixed bag for enforceability), this text says that if Anthropic decides to make a product like yours, you have to stop using Claude for that product.
That is a pretty powerful argument against depending heavily on or solely on Claude.
It may or may not be enforceable in the court of law, but they'll definitely ban you if they notice you...
And I'm pretty sure ban evasion can become an issue in the court of law, even if the original TOS may not hold up
LOL, that's so much worse than I imagined.
I know we want to turn everything into a rental economy aka the financialization of everything, but this is just super silly.
I hope we're 2-3 years away, at most, from fully open source and open weights models that can run on hardware you can buy with $2000 and that can complete most things Opus 4.5 can do today, even if slower or that needs a bit more handholding.
OpenAI said a while back that there was no moat. You'll see these AI companies panic ever more desperately as they all realize it's true.
That's different, though. If 20 other companies can host these models, you still have to trust them. The end result should be cheap hardware that's good enough to run a solid, mature LLM that can code comparably to a fast junior dev.
A more interim way to put it is "The current moat is hardware".
> hardware you can buy with $2000
Including how much RAM?
I assume strongly, in 3 years the prices will have dropped a lot again.
Because of increased supply or reduced demand?
Rather increased supply I assume.
Only a few memory suppliers remain, after years of competition, and they have intentionally reduced NAND wafer supply to achieve record profits and stock prices, https://news.ycombinator.com/item?id=46467946
In theory, China could catch up on memory manufacturing and break the OMEC oligopoly, but they could also pursue high profits and stock prices, if they accept global shrinkage of PC and mobile device supply chains (e.g. Xiaomi pivoted to EVs), https://news.ycombinator.com/item?id=46415338#46419776 | https://news.ycombinator.com/item?id=46482777#46483079
AI-enabled wearables (watch, glass, headphones, pen, pendant) seek to replace mobile phones for a subset of consumer computing. Under normal circumstances, that would be unlikely. But high memory prices may make wearables and ambient computing more "competitive" with personal computing.
One way to outlast siege tactics would be for "old" personal computers to become more valuable than "new" non-personal gadgets, so that ambient computers never achieve mass consumer scale, or price deflation that enabled PC and mobile revolutions.
I still haven't clearly understood what Zed, OpenCode, and the other third-party clients can and can't do with the Max plan. Developers want to use these clients and pay you $200 a month; why are you pissing us off? I understand some abusers exist, but you will never be able to ban 100% of them, technically.
Very poor communication, despite some reasonable intentions, could be the beginning of the end for Claude Code.
> Developers want to use these 3p client and pay you 200 a month, why are you pissing us off
Presumably because it costs them more than $200 per month to sell you it. It's a loss leader to get you into their ecosystem. If you won't use their ecosystem, they'd rather you just go over to OpenAI.
my guess?
They lose money on the $200/month plan, maybe even quite a bit. So that plan only exists to subsidize their editor.
Could be the typical "everything must be under our control" power fantasy companies have.
But if there really is "no moat" and open models will be competitive in a matter of time, then having "the" coding editor might be hugely relevant for driving sales. Ironically, they seem to have already kind of lost that, if what some people say about Claude Code vs. OpenCode is true...
I'd say _yes_. This is my `npx ccusage` (reads the .claude folder) since nov 20th:
│ Total │ input 3,884,609 │ output 3,723,258 │ cache create 215,832,272 │ cache read 3,956,313,197 │ total 4,179,753,336 tokens │ $3,150.99 │
It calculates tokens at public API pricing. But Anthropic models are also generally more expensive than others, so I guess it's sort of "self-made" value? Some of it?
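For what it's worth, the dollar figure ccusage reports is essentially the token counts multiplied by public per-million-token rates. A minimal sketch of that arithmetic, using the totals above; the rates here are assumed Opus-tier numbers for illustration, not Anthropic's actual price list:

```python
# Rough reconstruction of a ccusage-style cost estimate. The per-million
# rates below are illustrative assumptions, not Anthropic's price list.
RATES_PER_MILLION = {
    "input": 5.00,         # assumed base input rate
    "output": 25.00,       # assumed output rate
    "cache_create": 6.25,  # assumed 1.25x input for cache writes
    "cache_read": 0.50,    # assumed 0.1x input for cache hits
}

def api_equivalent_cost(tokens: dict) -> float:
    """Sum token counts times per-million rates into a dollar figure."""
    return sum(tokens[k] / 1_000_000 * RATES_PER_MILLION[k] for k in tokens)

# Token counts from the ccusage total row above.
usage = {
    "input": 3_884_609,
    "output": 3_723_258,
    "cache_create": 215_832_272,
    "cache_read": 3_956_313_197,
}

print(f"${api_equivalent_cost(usage):,.2f}")
```

Note how cache reads dominate the token count but not the cost; blended multi-model usage is also why ccusage's real figure would differ from any single-model estimate like this one.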
Only Anthropic knows but I imagine you're a significant outlier
I bet a lot of the tokens go unused each month. The per-token cost is pretty high for API access.
Honestly, I think Claude Code enjoyed an "accidental" success much like ChatGPT; Anthropic engineers have said they never thought this thing would catch on.
But being first doesn't mean you're necessarily the best. Not to mention, they weren't even first anyway (aider was).
Anthropic showed their true colors with their sloppy switch to using Claude Code for training data. They can absolutely do what they want but they have completely destroyed any reason for me to consider them fundamentally better than their competitors.
Behold the "Democratization of software development".
Xcancel Link: https://xcancel.com/SIGKITTEN/status/2009697031422652461
Opencode is much better anyway, and it doesn't change its workflow every couple of weeks.
PSA - please ensure you are running OpenCode v1.1.10 or newer: https://news.ycombinator.com/item?id=46581095
I signed up to Claude Pro when I figured out I could use it on opencode, so I could start things on Sonnet/Opus on plan mode and switch to cheaper models on build mode. Now that I can't do that, I will probably just cancel my subscription and do the dance between different hosted providers during plan phase and ask for a prompt to feed into opencode afterwards.
As of yesterday OpenAI seems to explicitly allow opencode on their subscription plans.
Can you point me to this claim? Also, last I checked, trying to connect to OpenAI prompts for an API key; does OpenAI's API key draw on the subscription quota?
Just want to make sure before I sign up for an OpenAI sub.
Yeah, but that would mean me giving money to Sam Altman, and that ain't happening.
Also GPT 5.2 is better than slOpus
Yeah, honestly this is a bad move on anthropic's part. I don't think their moat is as big as they think it is. They are competing against opencode + ACP + every other model out there, and there are quite a few good ones (even open weight ones).
Opus might be currently the best model out there, and CC might be the best tool out of the commercial alternatives, but once someone switches to open code + multiple model providers depending on the task, they are going to have difficulty winning them back considering pricing and their locked down ecosystem.
I went from Max 20x and ChatGPT Pro to Claude Pro and ChatGPT Plus + OpenRouter providers, and I have now cancelled Claude Pro and GPT Plus, keeping only Gemini Pro (super cheap) and using OpenRouter models + a local AI workstation I built running MiniMax M2.1 and GLM 4.7. I use Gemini as the planner and my local models as the churners. Works great; the local models might not be as good as Opus 4.5 or Sonnet 4.7, but they are consistent, which is something I had been missing with all commercial providers.
disagree. it is much better for anthropic to bundle than to become 'just another model provider' to opencode/other routers.
as a consumer, i do absolutely prefer the latter model - but i don't think that is the position I would want to be in if I were anthropic.
Nah, Anthropic thinks they have a moat; this is classic Apple move, but they ain't Apple.
> I went from Max 20x and ChatGPT Pro to Claude Pro and ChatGPT Plus + OpenRouter providers, and I have now cancelled Claude Pro and GPT Plus, keeping only Gemini Pro (super cheap) and using OpenRouter models + a local AI workstation I built running MiniMax M2.1 and GLM 4.7. I use Gemini as the planner and my local models as the churners. Works great; the local models might not be as good as Opus 4.5 or Sonnet 4.7, but they are consistent, which is something I had been missing with all commercial providers.
You went from a 5 minute signup (and 20-200 bucks per month) to probably weeks of research (or prior experience setting up workstations) and probably days of setup. Also probably a few thousand bucks in hardware.
I mean, that's great, but tech companies are a thing because convenience is a thing.
On opencode you can use models which are free for unlimited use and you can pick models which only cost like $15 a month for unlimited use.
a local ai workstation
Peak HN comment
I like how I can cycle through agents in OpenCode using tab. In CC all my messages get interpreted by the "main" agent; so summoning a specific agent still wastes main agent's tokens. In OpenCode, I can tab and suddenly I'm talking to a different agent; no more "main agent" bs.
I find Cursor CLI significantly better than OpenCode right now, unfortunately.
Edit: for those downvoting, I would earnestly like to hear your thoughts. I want OpenCode and similar solutions to win.
I wonder how will this affect future Anthropic products, if prior art/products exist that have already been built using claude.
If this is only meant to limit knowledge distillation for training new models, people copying Claude Code specifically, or Max-plan credentials being used as an API replacement, they could carve out proper exceptions, rather than writing terms so broad they risk turning away new customers for fear of (future) conflict.
Sounds like standard terms from lawyers – not very friendly to customers, very friendly to company – but is it particularly bad here?
I remember when I was part of procuring an analytics tool for a previous employer and they had a similar clause that would essentially have banned us from building any in-house analytics while we were bound by that contract.
We didn't sign.
> Sounds like standard terms from lawyers – not very friendly to customers, very friendly to company – but is it particularly bad here?
Compilers don't come with terms that prevent you from building competing compilers. IDEs don't prevent you from writing competing IDEs. If coding agents are supposed to be how we do software engineering from now on, yeah, it's pretty bad.
> Sounds like standard terms from lawyers
If they were "standard" terms, then how come no other AI provider imposes them?
This message from the Zed discord (from a zed staffer) puts it clearly, I think:
“….you can use Claude code in Zed but you can’t hijack the rate limits to do other ai stuff in zed.”
This was a response to my asking whether we can use the Claude Max subscription for the awesome inline assistant (Ctrl+Enter in the editor buffer) without having to pay for yet another metered API.
The answer is no, the above was a response to a follow up.
An aside - everyone is abuzz about “Chat to Code” which is a great interface when you are leaning toward never or only occasionally looking at the generated code. But for writing prose? It’s safe to say most people definitely want to be looking at what’s written, and in this case “chat” is not the best interaction. Something like the inline assistant where you are immersed in the writing is far better.
Art (prose, images, videos) is very different from code when discussing AI Agents.
Code can be objectively checked with automated tools to be correct/incorrect.
Art is always subjective.
Indeed, great way to put it
Yeah, but no.
I mean, they could have put _exactly_ that into their terms of service.
Hijacking rate limits is also never really legal, AFAIK.
Doesn't this make using Claude Agents SDK dangerous?
Suppose I wrote a custom agent which performs tasks for a niche industry. Wouldn't that be considered "building a competing service", because their Service performs agentic tasks via Claude Code?
This is a highly monopolistic move, in my opinion, from the Anthropic that already actively feels the most hostile towards developers.
This really shouldn't be the direction Anthropic goes. It's such a negative direction; they could instead have tried to cooperate with the large open-source agents and communicate with them. Instead they did this, which the developer community has met with criticism, and rightfully so.
I think there are issues with Anthropic (and their ToS); however, banning the "harnesses" is justified. If you're relying on scraping a web UI or reverse-engineering private APIs to bypass per-token costs, it's just VC subsidy arbitrage. The consumer plan has a different purpose.
The ToS is concerning, I have concerns with Anthropic in general, but this policy enforcement is not problematic to me.
(yes, I know, Anthropic's entire business is technically built on scraping. but ideally, the open web only)
Software dev's training the model with their code making themselves obsolete is encouraged not banned.
Claude code making itself obsolete is banned.
As long as I have a Claude subscription, why do they care what harness I use to access their very profitable token inference business?
Because your subscription depends on the very API business.
Anthropic's COGS is the rent on some number of H100s. The cost of a marginal query is almost zero until the batch fills up and they need a new cluster. So API clusters are usually built for peak load, with batches rarely full at any given time. Given that AI's peak demand is extremely spiky, they end up with low utilization numbers on the API side.
Your subscription is supposed to use that free capacity. Hence the token costs are not that high, hence you could buy that. But it needs careful management so that you don't overload the system. There is Claude Code telemetry which identifies the request as lower priority than API (and probably decides on queueing + caching too). If your harness makes 10 parallel calls every time you query, and doesn't manage context as well as Claude Code, it's overwhelming the system, degrading the performance for others too. And if everyone just wants to use the subscription and you have no API takers, the price of the subscription is not sustainable anyway. In a way you are relying on others' generosity for the cheap usage you get.
It's reasonable for a company to unilaterally decide how to monetize its extra capacity, and it's not unjustified to care. With a subscription you are not purchasing a promise of X tokens; for that you need the API.
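The spare-capacity argument above can be made concrete with a toy model. Every number here (cluster capacity, demand levels, the unit itself) is invented purely for illustration:

```python
# Toy model of the claim above: clusters are provisioned for peak API
# load, so deprioritized subscription traffic rides in the average slack
# at ~zero marginal cost -- until it outgrows the slack and forces new
# hardware. All units and numbers are invented for illustration.
CLUSTER_CAPACITY = 100  # hypothetical tokens/sec one cluster can serve

def marginal_clusters_for_subs(api_peak: int, api_avg: int, subs_avg: int) -> int:
    """Extra clusters needed just for subscription traffic, assuming it
    is queued behind API requests and only consumes idle capacity."""
    provisioned = -(-api_peak // CLUSTER_CAPACITY)    # ceil: sized for API peak
    slack = provisioned * CLUSTER_CAPACITY - api_avg  # idle capacity on average
    overflow = max(0, subs_avg - slack)               # subs load that doesn't fit
    return -(-overflow // CLUSTER_CAPACITY)           # ceil; 0 when it all fits

# Light subscription load fits in the trough: no extra hardware.
print(marginal_clusters_for_subs(api_peak=90, api_avg=40, subs_avg=50))
# Heavy subscription load (everyone ditching the API) forces a new cluster.
print(marginal_clusters_for_subs(api_peak=90, api_avg=40, subs_avg=150))
```

This is one way to read "relying on others' generosity": the cheap subscription only pencils out while API buyers keep paying for the peak-sized clusters.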
> Your subscription is supposed to use that free capacity. Hence the token costs are not that high, hence you could buy that. But it needs careful management so that you don't overload the system. There is Claude Code telemetry which identifies the request as lower priority than API (and probably decides on queueing + caching too). If your harness makes 10 parallel calls every time you query, and doesn't manage context as well as Claude Code, it's overwhelming the system, degrading the performance for others too. And if everyone just wants to use the subscription and you have no API takers, the price of the subscription is not sustainable anyway. In a way you are relying on others' generosity for the cheap usage you get.
I understand what you mean, but outright removing the ability of other agents to use the Claude Code subscription is still really harsh.
If telemetry really is the reason (note: I doubt it is; I think the marketing/lock-in aspect matters more, but for the sake of discussion let's assume telemetry is in fact the reason),
then they could have simply coordinated with OpenCode or the other agent providers. In fact, this is what OpenAI is doing: they recently announced a partnership with OpenCode and are actively embracing it. I am sure OpenCode and the other agents could emit telemetry, or at least support such a feature if need be.
From what I have read on Twitter, people were purchasing Max subs and using them as a substitute for API keys for their startups. Typical scrappy startup story, but this has the same bursty nature as API use in terms of concurrency and parallel requests. They used the OpenCode implementation. This is probably one of the triggers, because it screws up everything.
Telemetry is a reason, and it's also the stated reason. Marketing is plausible and likely part of the reason too, but lock-in etc. would have meant this came way sooner than now. They would not even be offering an API if they really wanted to lock people in; that is not consistent with their other actions.
At the same time, the balance is delicate: if you get too many subs users and not enough API users, the setup suddenly stops being profitable, because there is less underused capacity available to direct subs users to. This probably explains part of their stance too, and why they hadn't done it until now. OpenAI never allowed it, and now that they do, they will make more changes to the auth setup, which Claude did not. (This episode tells you how duct-taped the whole system was at Anthropic: they used the auth key to generate a Claude Code token, and just used that to hit the API servers.)
Demand-aggregation allows the aggregator to extract the majority of the value. ChatGPT the app has the biggest presence, and therefore model improvements in Claude will only take you so far. Everyone is terrified of that. Cursor et al. have already demonstrated to model providers that it is possible to become commoditized. Therefore, almost all providers are seeking to push themselves closer to the user.
This kind of thing is pretty standard. Nobody wants to be a vendor on someone else's platform. Anthropic would likely not complain too much about you using z.ai in Claude Code. They would prefer that. They would prefer you use gpt-5.2-high in Claude Code. They would prefer you use llama-4-maverick in Claude Code.
Because regardless of how profitable inference is, if you're not the closest to the user, you're going to lose sooner or later.
Because Claude Code is not a profitable business, it's a loss leader to get you to use the rest of their token inference business. If you were to pay for Claude Code by using the normal API, it would be at least 5x the cost, if not more.
source: pulled out of your a**
He may not be entirely correct, but the Claude Code plans are significantly better value than the API: the $100 plan may not be as cost-effective, but for $18 you can get something like 5x the usage the same money buys via the API.
I've seen dozens of "experts" all over the internet claim that they're "subsidizing costs" with the coding plans despite no evidence whatsoever. Despite the fact that various sources from OpenAI, Deepseek, model inference providers have suggested the contrary, that inference is very profitable with very high margins.
How am I going to give you exact savings, when the amount of work you can do on $18 is variable, while $100 of API credit only goes a limited way? You can easily exhaust $100 of API in one work day. On the $18 plan the limit resets daily or every 12 hours, so you can keep coming back. If API pricing is correct, which it looks like because all top models have similar costs, then it's reasonable to believe the monthly plans are subsidized.
And if inference is so profitable, why is OpenAI losing $100B a year?
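A back-of-envelope version of that comparison; the daily token volume and the blended per-million rate are invented for illustration, and only the $18 plan price comes from the comment above:

```python
# Compare a flat monthly subscription against metered API billing for
# the same hypothetical daily usage. All rates and volumes are assumptions.
PLAN_PRICE = 18.00            # flat monthly plan price, per the comment
API_RATE_PER_MILLION = 20.00  # assumed blended input/output API rate

def api_cost_for_month(tokens_per_day: int, workdays: int = 22) -> float:
    """What the same usage would cost if metered at API rates."""
    return tokens_per_day / 1_000_000 * API_RATE_PER_MILLION * workdays

monthly = api_cost_for_month(tokens_per_day=2_000_000)
print(f"API-equivalent: ${monthly:.2f}, vs ${PLAN_PRICE:.2f} flat")
```

At these made-up numbers the plan is more than an order of magnitude cheaper than metered billing, which is the commenter's point: either API prices carry a large margin, or the plans are subsidized (or both).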
https://xcancel.com/SIGKITTEN/status/2009697031422652461
This tweet reads as nonsense to me
It's quoting:
> This is why the supported way to use Claude in your own tools is via the API. We genuinely want people building on Claude, including other coding agents and harnesses, and we know developers have broad preferences for different tool ergonomics. If you're a maintainer of a third-party tool and want to chat about integration paths, my DMs are open.
And the linked tweet says that such integration is against their terms.
The highlighted term says that you can't use their services to develop a competing product/service. I don't read that as the same as integrating their API into a competing product/service. It does seem to suggest you can't develop a competitor to Claude Code using Claude Code, as the title says, which is a bit silly, but doesn't contradict the linked tweet.
I suspect they have this rule to stop people using Claude to train other models, or competitors testing outputs etc, but it is silly in the context of Claude Code.
This whole situation is getting out of hand. At the speed AI development moves, it's only a matter of time before a competitor has 80% of what CC does, and that will be good enough for most of us. Trying to Windows their way into this category is not Anthropic's smartest move.
> that has 80% what CC does
OpenCode already does 120% of what CC does.
OpenCode's amazing. I sometimes use it when I want an agent without signing up for anything. It can just work without any sign-up: just npx opencode (or any equivalent like pnpx, bunx, etc.).
I don't pay for any AI subscription. I just end up building single-file applications, but they might not be that good sometimes, so I did this experiment: I ask Gemini in AI Studio, or ChatGPT, or Claude, get the files, then paste them into OpenCode and ask it to build the file structure and fill everything in.
If your project involves setting up something like SvelteKit or any boilerplate with many, many files, I recommend this workflow to get the best of both worlds for essentially free.
To be really honest, I mostly just create single-file main.go programs for my use cases from the website directly, and I really love them. Sure, code understandability takes a bit of a hit, but my projects usually stay around 600 to 1,000 lines, 2,000 at most, and I really love this workflow; for me personally it's just one of the best.
When I try AI agents, they end up creating 25 or 50 files and over-architecting. I use AI mostly for prototyping, and that over-architecture actually hurts.
Mostly I just create software for my own use cases, though. Whenever I hit a problem I find remotely interesting, I try this, and the approach has worked remarkably well for me for zero dollars spent.
I absolutely love the amount of control opencode gives me. And ability to use Codex, Qwen and Gemini models gives it a massive advantage over Claude Code.
I am _much_ more interested in i. building cool software for other things and ii. understanding the underlying models than in building a "better Claude Code".
If it's so easy to make software, why wouldn't you also make a Claude Code?
Ok
Yup, I’ve been crowing about these customer noncompetes for years now and it’s clear Anthropic has one of the worst ones. The real kicker is, since Claude Code can do anything, you’re technically not allowed to use it for anything, and everyone just depends on Anthropic not being evil
Ever get involved in the React-is-Facebook conversation?
Is this a standard tech ToU item?
Is this them saying that their human developers don’t add much to their product beyond what the AI does for them?
Imagine if Visual Studio said "you can't use VS to build another IDE".
Imagine if Visual Studio said "you can't use VS to build any product or service which may compete with a Microsoft product or service"
does OpenAI have the same restriction?
One day all programs will belong to the AI that made them, which was trained in a time before we forgot how to program.
The corporate hypocrisy is reaching previously unseen levels. Ultra-wealthy thieves who got rich stealing a dragon's hoard worth of property are now crying foul about people following the same "ideals". What absolute snowflakes. The LLM sector is the only one where I'm rooting for Chinese corporations to trounce the incumbents, demonstrating the FAFO principle in practice.
I think this is kind of a nothingburger. This reads like a standard clause in any services contract. I also cannot (without a license):
1. Pay for a stock photo library and train an image model with it that I then sell.
2. Use a spam detection service, train a model on its output, then sell that model as a competitor.
3. Hire a voice actor to read some copy, train a text to speech model on their voice, then sell that model.
This doesn't mean you can't tell Claude "hey, build me a Claude Code competitor". I don't even think they care about the CLI. It means I can't ask Claude to build things, then train a new LLM based on what Claude built. Claude can't be your training data.
There's an argument to be made that Anthropic didn't obtain their training material in an ethical way so why should you respect their intellectual property? The difference, in my opinion, is that Anthropic didn't agree to a terms of use on their training data. I don't think that makes it right, necessarily, but there's a big difference between "I bought a book, scanned it, learned its facts, then shredded the book" and "I agreed to your ToS then violated it by paying for output that I then used to clone the exact behavior of the service."
When you buy a book you’re entering into a well-trodden ToS which is absolutely broken by scanning and/or training.
That's empirically false. If it was true, there wouldn't be any ongoing litigation about whether it's allowed or not. It's a legal gray area because there specifically isn't a law that says whether you're allowed or not allowed to legally purchase a text and sell information about the text or facts from the text as a service.
In fact, there's exactly nothing illegal about me replacing what Anthropic is doing with books by me personally reading the books and doing the job of the AI with my meat body (unless I'm quoting the text in a way that's not fair use).
But that's not even what's at issue here. Anthropic is essentially banning the equivalent of people buying all the Stephen King books and using them to start a service that specifically makes books designed to replicate Stephen King's writing. Claude being able to talk about Pet Sematary doesn't compete with the sale of Pet Sematary. An LLM trained on Stephen King books with the purpose of creating rip-off Stephen King books arguably does.
> I don't even think they care about the CLI
No, they actually do. Basically, they offer the Claude Code subscription at $200, which is a loss leader: you can realistically get $300-400 of API-equivalent value per month, or even more (close to $1,000).
So why do they bear the loss, I hear you ask?
Because it acts as a marketing expense. They get so much free advertising from Claude Code, and Claude Code is still closed source; they enforce a lot of restrictions (sometimes even downstream, as others mention). I have seen efforts to run other models on top of Claude Code, but they're not first-class citizens, and there is still some lock-in.
Opencode, on the other hand, is really perfect: no lock-in, absolutely goated. Those folks and others had created ways to use the Claude subscription itself; I think you could just use the OAuth sign-in and that was about it.
That all worked great for everyone except Anthropic, because the whole reason for the subscription was the marketing and lock-in, which OpenCode and others removed. So Anthropic did this. OpenCode also prevents lock-in with its ability to swap models really easily, which many people like, removing dependence on Claude as well.
I really hate this behaviour; when you think about it, it's really, really monopolistic behaviour from Anthropic.
Theo's video might help in this context: https://www.youtube.com/watch?v=gh6aFBnwQj4 (Anthropic just burned so much trust...)
That's not at all what they're saying, though. It's just nonsense to say "you can't recreate our CLI with Claude Code" because you can get literally any other competent AI to do it with a roughly comparable result. You don't need to add this clause to the TOS to protect a CLI that doesn't do anything other than calling some back-end—there's no moat here.
I find it slightly ironic that Anthropic benefits from ignoring intellectual property but then tries to enforce it on their competitors.
How would they even detect that you used CC on a competitor? There's surely no ethical reason not to do it, and it seems unenforceable.
This is the modus operandi of every AI company so far.
OpenAI hoovered up everything they could to train their model with zero shits about IP law. But the moment other models learned from theirs they started throwing tantrums.
They can just ask the LLM to report back if anyone asks it to build something they don't want. It's a massive surveillance issue.
They know everything you do with Claude Code, since everything goes through their servers.
We sell you our hammer, but you are prohibited from using it to make your own hammer?
Imagine a world where Google has its product shit together and didn’t publish the AIAYN paper, and has the monopoly on LLMs and they are a black box to all outsiders. It’s terrifying. Thankfully we have extreme competition in the space to mitigate anything like this. Let’s hope it stays that way.
AI is built by violating all rules and moral codes. Now they want rules and moral code to protect them.
Might makes right.
You're just now learning about corporate capitalism?
It's always been socialize the losses and capitalize the gains, all the while legislating rules to block upstart companies.
It's never ever been "fair". He who has the most gold makes the rules.
Does it follow then, that we should socialize our losses and ignore their TOS? It looks like yet again - fortune favors those who ask forgiveness later.
This is extra ridiculous even for "capitalism".
It would be like if Carnegie Steel somehow could have prohibited people from buying their steel in order to build a steel mill.
And moreover, in this case the entire industry as it exists today wouldn't exist without massive copyright infringement. So it's double-extra ironic: Anthropic came into existence to make money off of breaking other people's rules, and now it wants to set up its own.
Related:
Anthropic blocks third-party use of Claude Code subscriptions
https://news.ycombinator.com/item?id=46549823
"You are not allowed to use words found in our book to write your own book if you read our book."
Anthropic has just entered the "for laying down and avoiding" category.
"We stole the whole Internet, but do not dare to steal us."
This falls under "lmao even" right? Like, come on, the entire business model of most generative AI plays right now hinges on IP theft.
Is this targeted at cursor?
The controversy is related to the recent opencode bans (for using Claude Code OAuth to access Max plans rather than the API) and also to Anthropic recently cutting off Claude access for xAI researchers (which was mostly through Cursor).
Claude Code, make a Claude Code competitor. Make no mistakes.
Lol. Next will be, "Replacing our CEOs with AI is banned".
Can anyone read? The text doesn't mention Claude Code or anything like it at all.
I swear to god everyone is spoiling for a fight because they're bored. All these AI companies have this language to try to prevent people from "distilling" their model into other models. They probably wrote this before even making Claude Code.
Worst case scenario they cancel your account if they really want to, but almost certainly they'll just tweak the language once people point it out.