Thanks for reporting this. I'm assuming you are referring to the RSS feed?
In the meantime, the actual feed is https://idiallo.com/feed.rss until I figure out the issue.
No, they're referring to an error that pops up when you visit a page whose url ends in 'women-in-the-world.html'; you can click okay and still browse the page though :-)
Haha thank you. That went over my head. I dismissed that box without reading the error. But... I can neither confirm nor deny I understand what you are referring to ;)
I restarted blogging last year, going from a handful of blog posts to publishing consistently. All content gets published on my blog first. I've seen an ~8x increase in traffic. I was affected by zero-click results from Google's AI Overviews, but the bulk of my traffic now comes from RSS readers.
>the bulk of my traffic now comes from RSS readers.
I don't think this is correct unless you mean strictly the number of HTTP requests to your web server.
You were the 9th most popular blogger on HN in 2025.[0] Your post says you have about 500 readers via RSS. How can that represent more readers than people who read your posts through HN? I'd guess HN brought you about 1M visitors in 2025 based on the number of your front page posts.
You are right, my statement may be a bit misleading or incomplete. The ~500 readers are not just individual RSS clients; they include aggregator bots. For example, I see Feedly reporting ~200 subscribers, another newsreader reporting 50 subscribers, Feedbin, etc. Each of those only has between 1 and 3 IP addresses. So behind each RSS bot there is an arbitrary number of actual readers. I can't track those accurately.
However, users can click on an RSS feed article and read it directly on my blog. Those links have a URL param that tells me they are coming from the feed. When an article isn't on HN's front page, the majority of traffic comes from those feeds.
By the way, thank you for sharing this tool. Very insightful.
These are impressive metrics. Are you able to make a living off your 10M views?
I'm planning to leave my job this year and focus on content. I've mostly been considering YouTube, but if blogging can work too, I might consider that as well.
Not even close to making a living! It does pay for my server though which costs $15 a month. YouTube gives you much more visibility. I'll try to compile the numbers from my single Carbon ad placement and the donations I receive from readers.
But I also don't think I have the process in place to run a blog, a YouTube channel, and a podcast while holding a full-time job. Yes, the job is my source of income.
Yeah, I hear you. My understanding is that on YouTube you can make ~2k per 1M views with the default ads. I'm hoping that I can be funded by some combination of that and something like Patreon/membership/merch. But we will see; it's something I've wanted to do for years, and I am getting too old to put it off any longer.
I'm a firm believer that data collected without a clear action associated with it is meaningless - and I couldn't think of an action I would take if traffic went up or down on my personal blog - but tbh I mainly blog for myself, not really to build an audience, so our objectives might differ.
There are some actions you can take. For example, when my traffic plummeted, I saw through my logs that search engines were trying to access my search page with questionable queries. That's when I realized I became a spam vector. I gave a better rundown through the link I shared.
Same reason why people have personal projects and share them on GitHub, it's fun to see people using / starring / interacting with your project / blog.
Just an FYI, the data collected to reach those conclusions came from the server logs (Apache2 in my case). So if you run your own server or VPS, you already have this information.
If you want to count every search engine bot, AI crawler, and vulnerability scanner as users, then that works, but these days it's basically useless to rely on raw web server logs.
If it makes you feel better, on reddit, I shared my very first blog post about deprecating mysql_* functions in php. As a result, someone said something mean about my mother. I figured the web was full of trolls.
But that wasn't enough. Someone else wrote that my article was useless and I write at a 7th grade level. I turned off the monitor, went for a walk. I decided that blogging wasn't for me. It was time to delete my blog. I was so embarrassed.
When I came back, there was a reply to that comment. It said something like "that's a good thing, 7th grade level writing means we can all understand it easily". And that was enough to keep me going. 13 years so far.
Reddit is now just AI slop, so I don't know if that's an improvement or not over this story. I'm just glad you were able to get over that BS, engage with it all again, and keep going! I gave up and never went back around 2010, but I'm going to try again in 2026.
The problem with environments like Reddit, designed to make interaction low-energy and gamified, is that they gather the worst people. I've got ~63k karma there, disengaged some years ago, and I can't tell you how much ditching that, Twitter, and Facebook improved my mental health. There's some great fun to be had there, but it's often the same thing over and over again, increasingly drowned out by utter crap. They've taken multiple actions that have destroyed the sense of community and have become a poster child for ens*tification, unfortunately.
Thanks for sharing. After reading that comment, I realized we should encourage ourselves and others (who are more or less civilized human beings) to be the kind of person who wrote "that's a good thing..." - because fighting trolls is a game with unknown results, but encouraging people works much better. It doesn't always work, though, because sometimes the platform's nature prevents it. Like on Stack Overflow, where commenting on reactions will probably get you downvoted for being off-topic.
I once spoke in favor of remote work (around 2020) and someone here on HN told me to get cancer and die, before it was flagged enough times to get out of the way.
On YouTube, I also sometimes get mean comments, though at least there the automatic moderation catches them so they don't show up publicly and I can shadowban the offenders off the channel easily. None of the content is even controversial, YouTube just attracts a lot of angry people that feel entitled to speak what's on their mind.
I wouldn't publish in an environment where blocking or banning people is difficult. They're not entitled to my engagement with their hateful drivel. My blog also doesn't have comments. At the end of the day, I will say what I want to say.
A while back, I decided to make time tags dynamic on my website. First of all, they have a title attribute showing the actual date in UTC. By dynamic I mean that when something is just published, I use relative time that updates in real time: 1 second ago, 2, 3... then minutes, then hours, then days.
I always get frustrated when I see "7 months ago" or "X years ago"; the math is always inconsistent when they round it. So when something is more than 3 days old, I display the actual date.
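The cutoff logic described above can be sketched roughly like this. This is my own minimal sketch, not the author's actual code; the function name and the exact thresholds are assumptions, and pluralization is left out for brevity:

```python
from datetime import datetime, timezone

def friendly_time(published, now=None):
    """Relative time while a post is fresh; exact date once it's older than 3 days."""
    now = now or datetime.now(timezone.utc)
    seconds = int((now - published).total_seconds())
    if seconds < 60:
        return f"{seconds} seconds ago"
    if seconds < 3600:
        return f"{seconds // 60} minutes ago"
    if seconds < 86400:
        return f"{seconds // 3600} hours ago"
    if seconds < 3 * 86400:
        return f"{seconds // 86400} days ago"
    # Older than 3 days: no rounding ambiguity, just show the real date.
    return published.strftime("%B %d, %Y")
```

On the client side you'd re-render the fresh labels on a timer; the over-3-days branch never changes, so it can be rendered once.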
A special place in hell is reserved for Stack Overflow’s recent redesign, which shows “Over a year ago” both for comments that are 13 months old and for those that are 13 years old.
> I always get frustrated when I see a 7 months ago, or X years ago, the math is always inconsistent when they round it. So when something is more than 3 days old, I display the actual date.
What especially makes me angry is dev tools doing this.
No, GitHub, CircleCI, Google Console [1], and others. I need to see actual timestamps on commits, PRs, merges, logs, etc., not the bullshit "7hrs ago" when I'm trying to find out what broke.
[1] At one point a few years back, their log viewer would show this. Someone actually went out of their way to implement it, even though showing this is more work than proper timestamps.
The way that this is handled on most websites is that you show "X time ago" but you can hover over the time to get the full timestamp. For example, that's how it's handled here on Hacker News and Reddit.
Honestly, the fact that mobile browsers don't provide a way to see the contents of the title attribute is a severe UX failing on the part of the browser developers, not the website developers, who are literally using the attribute as intended.
The relative-time labeling bit HN in the ass this week during its outage (<https://news.ycombinator.com/item?id=46301921>), when hours-old comments were displayed as "n minutes ago", with n ranging from 0 to low-single-digits.
This made identifying the duration of the outage somewhat more difficult.
(HN does display the precise time in a title text for the timestamp which typically appears on hover, though you'd need to know that that's in UTC.)
I honestly don't understand why it's ever useful to show relative timestamps over absolute ones. It's not hard to look at a date or time and understand how far back it was; it's not even that I can do the math to figure out the relative time, it's that the relative version isn't worth calculating at all, because the absolute one is just as intuitive. If it's currently February 20XX and I see a timestamp of July 20XX-1, I know how long it's been since then, and I don't care about the number of months. If it's February and I see the timestamp "7 months ago", I don't immediately know it's July without at least a small amount of thinking, like "okay, a year before five months from now, so July" (which is especially silly, because now I'm leaning even more on relative times just to get back to the absolute date). Seeing the exact date also lets me know other pertinent facts like the season ("that picture was from the summer"), holidays ("it must be from the 4th of July barbecue"), etc.
Is there something I'm missing here about why people might prefer relative timestamps? I genuinely can't tell if everyone kind of universally hates them or if this is one of those things where my brain just works differently than a lot of other people.
> Is there something I'm missing here about why people might prefer relative timestamps?
I think most people are uncomfortable parsing timestamps for small-interval differences, e.g. `2025-12-19T16:28:09+00:00` for "31 seconds ago".
For larger intervals, I agree that timestamps are more useful. "1 day ago" is a particular bugbear of mine. One day meaning 13 hours, or meaning 35 hours? Sometimes that's important!
The original advice when relative timestamps became a thing was to choose based on the activity level of the content. If new content is constantly appearing and older stuff fades out of relevance quickly, then choose relative timestamps. Otherwise, use absolute timestamps.
The worst is inconsistency, and the best is sometimes both (when presented in a discoverable and convenient way -- hover text used to be that way, but this degrades on mobile).
To clarify, I don't mean to literally imply an exact timestamp format. Showing something like "December 19, 2025 4:28 PM" or "19 December 2025 16:28" seems strictly better to me than "31 seconds ago" because it doesn't either become inaccurate quickly or require having the page update in real-time.
Posts, for one thing, like yours, which was an hour ago as of this reply. Sometimes people want to see how old content is at a glance. Like, how long ago did someone log in?
I guess I just don't find the relative timestamp to be a more intuitive way of seeing that. If I see today's date and a time this morning, I don't need to "translate" that into an exact number of hours, because "12 hours ago" isn't more meaningful to me than "this morning", and "2 minutes ago" is likely to become wrong quickly (or require a technical measure to keep it accurate; given that the relative timestamp is arguably already more work to implement, that's now two extra things added to solve a problem that I don't really understand to exist in the first place).
Having thought through a bunch of different orders of magnitude of time (time in the past measured in seconds, minutes, hours, days, weeks, and years), I'm confident that I'd personally find the actual date and time to be more intuitive in every single one of them. What I'm not confident in is whether that would be the case for everyone else or not. I don't think there would be anything wrong with someone feeling differently than me, and if it turns out I'm in the minority, I wouldn't have any trouble accepting it, but it feels so fundamentally disconnected with the way I think about things that I have trouble conceiving of it other than as a hypothetical.
I disagree. When the tool promises to do something, you end up trusting it to do the thing.
When Tesla says their car is self-driving, people trust the car to drive itself. Yes, you can blame the user for believing, but that's exactly what they were promised.
> Why didn't the lawyer who used ChatGPT to draft legal briefs verify the case citations before presenting them to a judge? Why are developers raising issues on projects like cURL using LLMs, but not verifying the generated code before pushing a Pull Request? Why are students using AI to write their essays, yet submitting the result without a single read-through? They are all using LLMs as their time-saving strategy. [0]
It's not laziness, it's the feature we were promised. We can't keep saying everyone is holding it wrong.
Very well put. You're promised Artificial Super Intelligence and shown a super cherry-picked promo and instead get an agent that can't hold its drool and needs constant hand-holding... it can't be both things at the same time, so... which is it?
I don't know about being back, but it certainly isn't dead. A few years back, I used to get at least 10k readers a day. That number went down to less than 100 a day at its lowest, when I was writing 10 entries a year at most. Last year, I wrote just 4.
One thing I had failed to notice was that RSS was still active. So this year I started contributing consistently, over 150 posts so far, and I see RSS picking up right where it left off [0]. A lot of my blog posts suck, but I write them as observations of my current understanding of a subject. Readers have the agency to skip what they don't like and read only what they like.
I hadn't looked at my feed subscriber stats in a while, turns out I had around 6,000 at the start of 2025 and I'm up to around 12,000 now - very healthy!
I use my own server-side tracking to count them - I look out for the user-agent from feed software like Feedly and pull the number out of it:
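A minimal sketch of that extraction, assuming the aggregator advertises a "N subscribers" token in its user-agent the way Feedly does (formats vary by service, so treat the regex as an assumption to adapt per reader):

```python
import re

# Many feed aggregators advertise their subscriber count in the
# user-agent, e.g. "Feedly/1.0 (...; 200 subscribers; ...)".
SUBSCRIBERS = re.compile(r"(\d+)\s+subscribers", re.IGNORECASE)

def subscriber_count(user_agent):
    """Extract the advertised subscriber count, or None if absent."""
    m = SUBSCRIBERS.search(user_agent)
    return int(m.group(1)) if m else None
```

Running this over the distinct feed-reader user-agents in the access log and summing the counts gives the aggregate subscriber figure.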
Wow. A few questions:
- I recently added RSS to my blog. The URL works but I don't advertise it with the icon. Should I?
- What do you use to track traffic?
I don't use the icon, but at the end of every article I have "Follow me via RSS Feed" as a direct link to the feed. As for tracking RSS traffic, this graph is generated from my server logs. It is literally cat apache logs | grep my feed url | awk daily traffic | sort.
Note this shows me how many RSS readers have accessed my RSS daily. I can't actually track each person, although I have a report I'm working on for the end of the year.
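That grep/awk pipeline can be sketched in Python as well. This is my own rough equivalent, not the author's script; the Apache combined log format and the /feed.rss path are assumptions to adjust for your own vhost:

```python
import re
from collections import Counter

# Apache combined log: the date is the first bracketed field,
# the request path follows the method in the quoted request line.
LINE = re.compile(r'\[(\d{2}/\w{3}/\d{4}):[^\]]+\] "(?:GET|HEAD) (\S+)')

def daily_feed_hits(lines, feed_path="/feed.rss"):
    """Count requests to the feed URL per day, like grep | awk | sort."""
    counts = Counter()
    for line in lines:
        m = LINE.search(line)
        if m and m.group(2).startswith(feed_path):
            counts[m.group(1)] += 1  # key is the day, e.g. "19/Dec/2025"
    return counts
```

As the comment notes, this counts fetches per day, not people: one Feedly hit can stand in for hundreds of subscribers.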
Do you also have something like <link rel=alternate href=/atom.xml type=application/atom+xml> in your <head> element?
Things like this let me just throw homepages (or blog pages) at feed readers and they can discover all the different feeds available and I can pick one (although you really don't need more than one, generally).
I remember when ChatGPT 3.5 was going to be AGI, then 4, then 4o, etc. It's kind of like doomsday predictions: even when they fail, it's OK, because the next one, oh, that's the real doomsday. I, for one, am waiting for a true AI Scotsman [0].
It started with GPT-2 - not sure if OpenAI really believed it, or were just hyping it up, but they initially withheld public release of GPT-2 because it was "too powerful and dangerous"...
At the time there was no obvious reason not to trust that OpenAI was trying to act for the benefit of society, per their charter, so it seemed like an abundance of caution, and this level of LLM capability was new to most of us, so it was hard to guess how dangerous it actually was...
However, in retrospect, seeing how OpenAI continues to behave, it may well have just been to get publicity.
This whole "Be warned, we're about to release something that will destroy society!" shtick seems to be a recurring thing with the AI CEOs, specifically Altman and Amodei (who switched into hardcore salesman mode about a year ago).
The latest Twitter "warning" from Altman is to claim that their AI will soon be at the level of their AI developers, and so we should be prepared (for the self-accelerating singularity I suppose). Maybe this inspired someone to write him another trillion dollar check?
It's funny, just today I published an article with the solution to this problem.
If they don't bother writing the code, why should you bother reading it? Use an LLM to review it, and eventually approve it. Then of course, wait for the customer to complain, and feed the complaint back to the LLM. /s
Large LLM generated PRs are not a solution. They just shift the problem to the next person in the chain.
But how do they know it's vibe-coded? It may have a smell to it, but the author might not know it for a fact. The fact that it's vibe-coded is actually irrelevant; the size of the request is the main issue.
I'm not gonna make assumptions on behalf of OP, but if you have domain knowledge, you can quickly tell when a PR is vibe-coded. In a real world scenario, it would be pretty rare for someone to generate this much code in a single PR.
And if they did in fact spend 6 months painstakingly building it, it wouldn't hurt to break it down into multiple PRs. There is just so much room for error reviewing such a giant PR.
I’ve never posted a question to S.O. My infuriation is entirely gratuitous. So many times I’ve found a polite, well worded question asking exactly what I need answered, only to see it closed as off topic (and we’re talking “question about preg_match() on the PHP stackexchange” type question) or for some condescending asshole to mark it as duplicate, linking a mostly unrelated and far simpler question with no further indication why this might be at all the proper response.
Not really. LLMs are good at indexing and digesting documentation, up to and including actual source code, and answering questions about it.
And they never "Vote to close as duplicate" because somebody asked something vaguely similar 10 years ago about a completely different platform and didn't get a good answer even then.
Stack Overflow is the taxi industry to AI's Uber. We needed it at one point, but it really always sucked, and unsurprisingly some people took exception to that and built something better, or at least different.
> LLMs are good at indexing and digesting documentation, up to and including actual source code, and answering questions about it.
Requires citations not in evidence. Source code and documentation rarely co-exist, and even the best source code is not even close to being well described by the documentation of the software it is part of. I basically call BS.
SO provided the connection between natural language (primarily English) and source code. Access to source code alone doesn't do that, commented code notwithstanding.
I don't suspect that SO alone is anywhere near sufficient to train LLMs to predict solutions to coding problems and write code. There must be additional training going on with tagged sets. I've heard about people being employed by AI companies to solve programming problems just for the sake of generating training pairs.
No, compsci textbooks and language manuals do that. SO is not the primary, canonical educational resource you seem to think it is, and they'd be the first to agree.
By and large, compsci textbooks are not sources of large amounts of working code in a specific language. Some programming-oriented ones may be; does Numerical Recipes in C count as a compsci book?
True, I was assuming that people would think a bit more abstractly, or at least a bit more generously, but sometimes I forget where I am. By "compsci" I mean everything from graduate-level theoretical texts all the way down to "101 BASIC Programs for the TRS-80."
In the old days, magazine articles would also present practical code alongside plaintext explanations of how it worked. There's still no shortage of tutorial content, although not as much in paper form, and even less on Stack Overflow.
They may not co-exist in real life, but in a million-dimension latent space you'd be surprised how many shortcuts you can find.
Requires citations not in evidence.
If you didn't bother to read the foundational papers on arxiv or other primary sources, it'd be a waste of time for me to hunt them down for you. Ask your friendly neighborhood LLM.
OMG YES, that site needed to die! I posted a few times on subjects I was an expert in, and hence they were difficult issues, and no one would ever answer them.
The few other times I posted they were questions about things I wasn't an expert in, hence why I was asking, and my god, it was like I was pulling them away from their busy schedules and costing them time at work. It's like you don't have to answer if you have something better to do.