Please note that posting the same thing multiple times in succession is frowned upon here (but congrats getting to the first page!)
I’m also confused by the title – why “Postman”? I do know about Postman the HTTP client, but I don’t get the parallel here.
This platform is an MCP client and an MCP server creator tool: you make the MCP servers from APIs. It is a “Postman” tool in two ways
andes314, can you expand on how you see this as Postman for MCP?
It is a platform to create MCP servers from API endpoints, and then chat with them without having to use Claude’s clunky integration process. It is simple and complete.
If it gets the hug of death, please try the Claude client to test it! Anthropic only allows me 30 req/s at the moment.
How much are you spending on tokens right now?
Less than you’d think
"error, please clear and try again: Error code: 429 - {'type': 'error', 'error': {'type': 'rate_limit_error', 'message': 'This request would exceed the rate limit for your organization (67777b05-661c-4183-aa19-ec6e299f95ac) of 50,000 input tokens per minute. }}
This is a very bad idea buddy. Maybe try letting users set their API tokens.
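A bring-your-own-key setup would sidestep the shared org-wide rate limit. A minimal sketch of the idea (the header name and function are hypothetical, not from the project): prefer a per-user key from the request, and fall back to the server's own key only if one is configured.

```python
import os

def resolve_api_key(headers: dict) -> str:
    """Pick the Anthropic API key to use for this request.

    A user-supplied key means their 429s count against their own
    organization, not the platform's shared limit.
    (Header name "X-Anthropic-Key" is an illustrative choice.)
    """
    user_key = headers.get("X-Anthropic-Key")
    if user_key:
        return user_key
    server_key = os.environ.get("ANTHROPIC_API_KEY")
    if server_key:
        return server_key
    raise RuntimeError("no API key: send X-Anthropic-Key or set ANTHROPIC_API_KEY")
```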
Does anyone here still use Postman? I found it very bloated and not great at some basic things I wanted to do quickly. It felt pretty closed off and proprietary when I just wanted to save some of my queries in git for my QA team.
My platform goes beyond being an automatic API wrapper: it lets you specify to the model, in natural language, how it should parse inputs and outputs. I find LLMs are very responsive to this type of specification, and to the best of my knowledge no one else is trying this yet.
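For concreteness, here is one way such a natural-language parsing spec could ride along in an MCP-style tool definition. This is a sketch under my own assumptions (the endpoint, tool name, and instruction text are illustrative, not from the project); `name`, `description`, and `inputSchema` are the standard MCP tool fields.

```python
# Sketch: an MCP tool wrapping a hypothetical GET /v1/weather endpoint,
# where the description carries plain-English instructions telling the
# model how to parse chat input and render the API's output.
weather_tool = {
    "name": "get_weather",
    "description": (
        "Wraps GET /v1/weather. "
        "Parse the user's location loosely: accept city names, ZIP codes, "
        "or 'near me', and normalize to a single city string. "
        "Summarize the JSON response as one friendly sentence."
    ),
    "inputSchema": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}
```

Because the parsing rules live in the description the model reads, changing how inputs are interpreted is a text edit rather than a code change.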
You also don’t seem to offer a simple chat-based client.