Hacker News

That's an awesome tool! I think textclip.sh solves a different problem though (correct me if I'm wrong - this is the first time I've seen it). Compression at the URL/transport layer helps with sharing prompts, but the token count still hits you once the text is decompressed and fed into the model. The LLM sees the full uncompressed text.
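To make that concrete, here's a minimal sketch (my own illustration, not textclip.sh's actual code): raw-DEFLATE plus base64url shrinks the URL payload a lot, but whatever reaches the model is the decompressed text, so the in-context token count is unchanged.

```python
import base64
import zlib

prompt = "Explain the service in detail. " * 40  # a long, repetitive prompt

# Raw DEFLATE (wbits=-15) + base64url, roughly what a browser's
# CompressionStream("deflate-raw") pipeline would produce.
co = zlib.compressobj(wbits=-15)
compressed = co.compress(prompt.encode()) + co.flush()
url_fragment = base64.urlsafe_b64encode(compressed).decode().rstrip("=")

# The URL payload is much smaller than the prompt...
print(len(prompt), len(url_fragment))

# ...but after decompression the model sees the full original text,
# so it pays the full token cost.
pad = "=" * (-len(url_fragment) % 4)
restored = zlib.decompress(
    base64.urlsafe_b64decode(url_fragment + pad), wbits=-15
).decode()
assert restored == prompt
```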

The approach with GlyphLang is to make the source code itself token-efficient. When an LLM reads something like `@ GET /users/:id { $ user = query(...) > user }`, that's what gets tokenized (not a decompressed version). The reduced tokenization persists throughout the context window for the entire session.

That said, I don't think they're mutually exclusive. You could use textclip.sh to share GlyphLang snippets and get both benefits.

Yes, the tool here is just for sharing the prompt; sorry, the first example I had handy is the one describing the service itself.

Here it is in plain text so it's more visible:

```
textclip.sh→URL gen: #t=<txt>→copy page | ?ask=<preset>#t=→svc redirect | ?redirect=<url>#t=→custom(use __TEXT__ placeholder).
presets∈{claude,chatgpt,perplexity,gemini,google,bing,kagi,duckduckgo,brave,ecosia,wolfram}.
len>500→auto deflate-raw #c= base64url encoded, efficient≤16k tokens.
custom redirect→local LLM|any ?param svc.
view mode: txt display+copy btn+new clip btn; copy→clipboard API→"Copied!" feedback 2s.
create mode: textarea+live counters{chars,~tokens(len/4),url len}; color warn: tokens≥8k→yellow,≥16k→red; url≥7k→yellow,≥10k→red.
badge gen: shields.io md [!alt](target_url);
```
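For readers who prefer code to the notation: here's my reading of the URL-generation rule as a sketch (the function name and base URL layout are assumptions, not the actual implementation). Text at or under 500 chars goes plainly into `#t=`; longer text is deflate-raw-compressed and base64url-encoded into `#c=`.

```python
import base64
import urllib.parse
import zlib


def make_clip_url(text: str, base: str = "https://textclip.sh/") -> str:
    """Hypothetical sketch: #t= for short text; deflate-raw + base64url
    in #c= once the text exceeds 500 characters."""
    if len(text) <= 500:
        return base + "#t=" + urllib.parse.quote(text)
    co = zlib.compressobj(wbits=-15)  # raw DEFLATE, like deflate-raw
    payload = co.compress(text.encode()) + co.flush()
    return base + "#c=" + base64.urlsafe_b64encode(payload).decode().rstrip("=")


print(make_clip_url("hello world"))
print(make_clip_url("x" * 600)[:60])
```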

It uses math notation to heavily compress the representation while keeping the information content relatively intact (similar to GlyphLang). An LLM can later use it to describe the service in detail and answer users' questions about it. The same applies to arbitrary information, including source code/logic.
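As a small aside, the live counters and color thresholds from the create-mode spec above reduce to something like this sketch (function shape is mine; the len/4 estimate and the 8k/16k and 7k/10k thresholds are from the snippet):

```python
def counters(text: str, url_len: int) -> dict:
    """Sketch of the create-mode counter logic: tokens estimated
    as len/4, with yellow/red warning thresholds."""
    tokens = len(text) // 4
    return {
        "chars": len(text),
        "tokens": tokens,
        "token_warn": "red" if tokens >= 16_000
                      else "yellow" if tokens >= 8_000 else None,
        "url_warn": "red" if url_len >= 10_000
                    else "yellow" if url_len >= 7_000 else None,
    }


print(counters("x" * 2_000, 500))  # short text: no warnings
```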

