I’ve been using and paying for Anthropic’s Claude for a while now, and there are some things that bug me about it.
Firstly, I’ll mention that I love Claude. The output is detailed and useful.
I use Claude for thinking through hard problems. Sometimes directly, sometimes in Perplexity or Cursor.
I OFTEN use Claude to summarise books. I’ve usually already read the book and want a summary to re-read as a reminder, as per the Brain Rule “Repeat to remember”, also known as spaced repetition.
I’ll go back and read the summary, usually whenever I’m about to recommend the book, podcast episode or whatever it is I’m recommending. That’s the other main reason I summarise: to entice people to listen to or read the whole thing, to let them quickly work out it’s just not interesting to them, or to give them at least enough of an idea of what I’m talking about that we can have a somewhat useful conversation.
Some complaints:
1. Needs a useful print stylesheet
This should be relatively easy. Create some CSS with a print media query that lets me print the WHOLE conversation.
Right now if I go to print the chat, I only get a single page, not multiple pages, because of the way the framing and scrolling works.
This is a pain. When I’ve needed to create a PDF of a whole chat (e.g. multiple questions and answers), I usually spend time in the browser inspector hunting for the section that has `overflow` set to `hidden`, or moving the inner chat section closer to the main `<body>` tag and removing some of the wrapping HTML. I don’t remember the exact steps and fumble through it each time.
A good hour of work from the Claude UI/UX team could make this pain go away.
A chat I had with Claude about the Rules for Rulers is a small example.
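I don’t know Claude’s actual markup, so the selectors below are hypothetical placeholders, but the fix is roughly this shape: undo the fixed-height scroll container for print and hide the chrome that wastes paper.

```css
/* Hypothetical selectors – Claude's real class names will differ. */
@media print {
  /* Let the conversation grow down the page instead of
     scrolling inside a fixed-height pane. */
  .chat-scroll-container {
    overflow: visible !important;
    height: auto !important;
    max-height: none !important;
  }

  /* Hide the sidebar and the message input box when printing. */
  .sidebar,
  .message-input {
    display: none !important;
  }
}
```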

2. Needs the ability to Export the whole Chat
Claude has a Copy button, but it only copies a single answer, not the whole chat.
This makes it harder for me to share the discussion, be that with another person, on my blog, or even with another AI.
It means I have to copy and paste each of my questions and each of Claude’s answers, which is burdensome, especially compared with Perplexity’s export to Markdown, PDF or DOCX, which is ideal.
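As a rough sketch of how small the missing feature is: the formatting half of a whole-chat Markdown export is a pure function. The turn-gathering part would need Claude’s real DOM selectors, which I don’t know, so that bit is only a comment.

```typescript
interface Turn {
  role: "Human" | "Claude";
  text: string;
}

// Format a whole conversation as Markdown, one heading per turn.
function chatToMarkdown(turns: Turn[]): string {
  return turns.map((t) => `## ${t.role}\n\n${t.text}`).join("\n\n");
}

// In the browser you'd first gather the turns from the page, e.g.
// (".chat-turn" is a made-up selector – the real one will differ):
//
//   const turns = Array.from(document.querySelectorAll(".chat-turn"))
//     .map(el => ({ role: ..., text: el.textContent ?? "" }));
//
// then copy chatToMarkdown(turns) to the clipboard or save it as a file.
```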

3. Uploading a large file silently fails
I’ve done this a few times. I’ve attempted to give Claude an eBook or text file of, say, 700 KB of text. It sits there processing for a little while, then the attachment just disappears without any error message.
The main issue is that there’s no error message. The secondary issue is the cause: the browser console logs an error about setting local storage, which hints that it’s not an LLM context-window limit but an implementation bug.
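I’m guessing at the implementation here, but if the attachment is being stashed in `localStorage`, a write past the quota throws (typically a `QuotaExceededError`), and silently swallowing that exception would produce exactly this behaviour. A sketch of the fix, under that assumption: catch the failure and surface a message instead.

```typescript
// Minimal storage shape so this works with localStorage or a test double.
interface KVStore {
  setItem(key: string, value: string): void;
}

// Try the write; return an error message for the UI instead of
// letting the attachment silently vanish.
function trySaveAttachment(
  store: KVStore,
  key: string,
  data: string
): string | null {
  try {
    store.setItem(key, data);
    return null; // success, nothing to show the user
  } catch {
    // Browsers throw when the (commonly ~5 MB) storage quota is exceeded.
    return "Attachment too large to store locally – try a smaller file.";
  }
}
```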
4. Can Claude have a calculator MCP by default?
The Model Context Protocol seems awesome. Reading the Pragmatic Engineer article on it, I see how it solves a bunch of different problems rather well (except for some security issues).
What I’d love to see is for the normal Claude on the website (not the desktop version or through my IDE) to ship with at least a default calculator MCP.
We know that LLMs aren’t that good at doing maths. Neither are humans.
But giving it access to a calculator would mean it could do everything needed: addition, subtraction, multiplication and division, but also square root, log, modulus and the like.
I’d also give it conversion capabilities, e.g. from one unit of weight to another (kg to stones?), or one unit of volume to another (e.g. US quarts to millilitres). You could even integrate a currency converter, with not just current exchange rates but rates at a historical point in time. Add an inflation-adjustment capability and I could ask how buying a home in the USA in 1922 compares with buying a car in Australia in 2018, or whatever else I might be interested in. If I see extended thinking mode call the calculator MCP, I’ll know the answer is going to be correct.
Of course, like system 1 and system 2 thinking from Thinking Fast and Slow, I’m sure Claude knows that 1+1 = 2 and some other basics, it’s probably memorised more of the times tables than most humans alive. But the calculator integration would certainly still be handy for larger calculations.
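The tool surface for this is tiny. A sketch of the operations such a calculator MCP server might expose (the conversion factors are standard: 1 stone = 14 lb ≈ 6.35029 kg, 1 US liquid quart ≈ 946.353 mL; everything else is plain arithmetic):

```typescript
// Conversion factors: 1 stone = 6.35029318 kg, 1 US liquid quart = 946.352946 mL.
const STONE_IN_KG = 6.35029318;
const US_QUART_IN_ML = 946.352946;

// A few of the operations a calculator MCP server might expose as tools.
const calculator = {
  add: (a: number, b: number): number => a + b,
  sqrt: (x: number): number => Math.sqrt(x),
  mod: (a: number, b: number): number => a % b,
  // Log to an arbitrary base via the change-of-base rule.
  log: (x: number, base: number = Math.E): number =>
    Math.log(x) / Math.log(base),
  kgToStones: (kg: number): number => kg / STONE_IN_KG,
  usQuartsToMl: (quarts: number): number => quarts * US_QUART_IN_ML,
};
```

Seeing a call like `calculator.usQuartsToMl(2)` in the extended thinking trace, rather than the model doing the arithmetic in its weights, is what would let me trust the number.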
I’m sure there are some other MCP servers that should be integrated by default.
I asked Claude and it suggested some great ones including:
- Search/Knowledge Base – Real-time access to current information would eliminate the knowledge cutoff limitation and provide up-to-date facts.
- Code Execution Environment – A sandboxed runtime for multiple programming languages would allow testing snippets beyond just JavaScript in the analysis tool. (Doesn’t it have this already?)
- Image Generation – Native ability to create images from descriptions without requiring external tools. (integration with Midjourney?)
- Data Visualization – Advanced charting capabilities that go beyond what’s currently possible with React components.
- Document Processing – Better handling of document uploads with OCR for images and improved parsing for complex formats like PDFs. (I think its current conversion capabilities are fine, it just doesn’t know about them because the upload-and-conversion step happens before the LLM sees the documents?)
- Translation Services – While Claude can translate between languages, a dedicated service could provide more specialized capabilities for technical or domain-specific content. (I’ve not used Claude for translation, but didn’t think this would be a problem?)
- Calendar/Time Management – Integration with scheduling tools to help with planning or reminders. (I’m guessing the system prompt already includes the current date/time but being able to ask about what was the day of week and timezone 1,254 days ago might be useful for some answers?)
- File Storage – Persistent storage to save and retrieve information across conversations. (I’m guessing this would let it selectively read extra data as needed and it would probably be useful for those files to have enough summary info so it knows if it should read them or not? Also, isn’t this how we get close to AGI? Should this NOT be allowed?)
- Location/Mapping Services – For geographical questions, directions, or location-based information. (Yeah, this makes sense, I’ve seen Perplexity and Google’s AI answers be decent enough at these types of questions I’m probably not going to use Claude much for them, but I can see the capability being useful)
- Structured Data Access – APIs to common databases or datasets like census information, scientific data, or financial statistics. (Again, this is where search capabilities like Perplexity’s are great, but having specific API access to systems we generally expect it to know, like the current stock market, or the ability to search YouTube and even parse the captions, would probably be useful? Although I’m sure there are integration costs, and maybe this is how systems like news websites, Reddit and the like get a share of inference revenue?)
5. Speed at least sometimes matters
Generally Claude is pretty good and I’m fine with it. But having experienced Cerebras AI and its insane speed, it seems like I should, at least a few times a month (I’m on the Pro plan), get a really fast response.
Of course, with a new $200/month offering, maybe even more could be super fast? Wafer-scale chips seem like a really good technology to utilise here.
Some positives:
- They’ve added chat sharing. This was going to be one of my gripes. I wanted this early on and Perplexity was the first place that really did it, but I noticed ChatGPT also had the feature when someone linked me to a chat somewhat recently. So I’m grateful they’ve caught up.
- Anthropic invented MCP and that is awesome.
- I really enjoy the responses from Claude; they have more soul, a better vibe, and seem more thought out than what I’ve gotten from Gemini.
NB: I haven’t used OpenAI’s models much so can’t comment on those.
