One absolutely diabolical mechanism that was used (at least 5 years ago when this scourge of ransomware started to rear its ugly head) goes something like this:
1. Gain access to change the code on the front-end web servers (usually PHP)
2. Change the database access layer to transparently encrypt data being written to the database, and decrypt data being read from the database. The key would be loaded into memory by curl'ing an attacker-controlled website at startup.
3. Wait 30 days
4. Notify the company that they're compromised, turn off the attacker-controlled key service, and restart the web front end
Now step (3) ensures that most of the data in the database has been rewritten, and if your backups are dumps of the production database, you now have a month of encrypted backups that you can't read without the key... If you're lucky, you may have a month-old backup to restore from; if you're unlucky, you rotate backups every 30 days and have nothing readable left.
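To make the mechanism concrete, here's a minimal sketch of what such a shim could look like (this is my own illustration, not the actual code from the story; the key URL and the db helpers are hypothetical). The point is that the key only ever lives in memory, so once the remote key service goes dark, everything written through the shim, and every dump taken from it, is ciphertext:

```python
# My own illustration of the mechanism described above, not the code from the
# story. KEY_URL and the db.update()/db.select() helpers are hypothetical.
# The key is fetched once at startup and held only in memory, so when the
# remote key service disappears, data written through this shim (and every
# backup dumped from it) becomes unreadable ciphertext.
import urllib.request

from cryptography.fernet import Fernet  # pip install cryptography

KEY_URL = "https://example.invalid/key"  # placeholder for the remote key service

_fernet = Fernet(urllib.request.urlopen(KEY_URL).read())  # key held only in memory

def write_value(db, table: str, column: str, row_id: int, value: str) -> None:
    """Encrypt a value before handing it to the real database layer."""
    db.update(table, column, row_id, _fernet.encrypt(value.encode()))

def read_value(db, table: str, column: str, row_id: int) -> str:
    """Decrypt a value read back from the database, so callers never notice."""
    return _fernet.decrypt(db.select(table, column, row_id)).decode()
```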
If there's an engineer who can pull this off, why are they screwing around with ransomware? Have them send me their resume: if they can deploy code to our frontend transparently without customers or devops having to spend time or even noticing it we'll happily pay them more than they're asking for in ransom.
Edit: thinking about it, this story doesn’t add up. Doesn’t this mean the client must have a copy of the decryption key, meaning any cached client would render the ransom demand worthless? It’s also just so much easier, if someone has that level of access, to make a copy of the database, encrypt it, then blow away the old one. Doing a silent deploy of client code with no one noticing seems way harder.
Honestly, many of them are in nations where they are working for a state-sponsored cybercrime agency and have few other options, if any. The North Korean and Iranian governments are major purveyors of ransomware; I've known many clients who were breached by them (the FBI guys said it was usually one of the two each time). If not, it's probably Russia or China, especially if it's a higher-value target. If it's just a private citizen flinging exploits at IP ranges, it's possibly Brazil too.
Honestly, it used to be Eastern Europe, until we started realizing they've got some serious programming talent and contracted some work out. Now it's less bad, though still a bit sketchy. Of course, that doesn't help with nation-state attacks.
> if they can deploy code to our frontend transparently without customers or devops having to spend time or even noticing it we'll happily pay them more than they're asking for in ransom.
This is one of those stories that goes around in certain communities. I don't know if it's true or not; it may well be. I've heard it several times, but only ever second- or third-hand, and I've also heard it's happened to more than one company. The thing is, you can't really verify this sort of thing: if it's true, whoever developed it would throw their full weight behind it and hit as many targets as possible, it would probably be under tight investigation by some three-letter agency, and the companies involved probably wouldn't want to disclose it. It's not unreasonable to hypothesize that North Korea or Iran put together a kit, though they'd probably restrict it to one particular stack (too much technical debt otherwise).
Of course, it's also possible it's not true, and I wonder if that's the more likely explanation. Every industry has such stories; maybe this is the white whale of forensics.
Back in the day when memes were in the form of urban legends and dispensing with them required various paperback books, these were called FOAF stories: always "a friend of a friend."
Implementing zero-downtime transparent encryption on any moderately complex codebase above the database layer sounds like a big engineering challenge. Even more so if you're trying to do it without the sysadmin noticing. Even more so if there is no persistence for the encryption key, so every user request ends up having to fire a request to the attacker's server - that'll kill performance. I doubt many attackers would attempt it.
If I’ve learned anything in decades of working in infosec, it’s to never underestimate the ingenuity of a determined attacker.
Read through any modern multi-stage remote exploit. The amount of engineering required to pull these stunts off - reliably, no less - is truly staggering, and yet attackers continue to innovate and develop new techniques and even more complex exploits.
And you overestimate the competence of the defenders: a sysadmin? What small business has a dedicated sysadmin? It’s not like we’re talking about running a global hosted chat service like Slack here. (Oh wait...)
BTW, persistence for the key in memory is trivial: create a shared-memory segment and store the key there, with a TTL.
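For instance, a rough sketch of that caching scheme (the segment name, TTL, and key size are made up for illustration), using Python's multiprocessing.shared_memory: the first worker to fetch the key stores it with an expiry timestamp, and later requests read it back instead of calling out again.

```python
# Rough sketch only: segment name, TTL, and key size are all hypothetical.
import struct
import time
from multiprocessing import shared_memory
from typing import Optional

SEGMENT_NAME = "key_cache"  # hypothetical segment name
TTL_SECONDS = 300           # hypothetical time-to-live
KEY_SIZE = 32               # assumed fixed key length

def store_key(key: bytes) -> None:
    """Write the key plus an expiry timestamp into a named shared-memory segment."""
    assert len(key) == KEY_SIZE
    try:
        shm = shared_memory.SharedMemory(name=SEGMENT_NAME, create=True,
                                         size=8 + KEY_SIZE)
    except FileExistsError:
        shm = shared_memory.SharedMemory(name=SEGMENT_NAME)
    shm.buf[:8] = struct.pack("d", time.time() + TTL_SECONDS)  # expiry timestamp
    shm.buf[8:8 + KEY_SIZE] = key
    shm.close()

def load_key() -> Optional[bytes]:
    """Return the cached key, or None if the segment is missing or expired."""
    try:
        shm = shared_memory.SharedMemory(name=SEGMENT_NAME)
    except FileNotFoundError:
        return None
    expiry = struct.unpack("d", bytes(shm.buf[:8]))[0]
    key = bytes(shm.buf[8:8 + KEY_SIZE]) if time.time() < expiry else None
    shm.close()
    return key
```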
Wait, how does this work? The next time a new version is deployed, the deploy script will overwrite all the files, and people will notice that all the data is coming back damaged.
Was it a site in maintenance mode that no one was working on anymore?
Not only that, but the company must not have had a single front-end dev open up dev tools for 30 days (which should happen at least once a month if anybody is debugging or doing any frontend work whatsoever) and notice that the usual JSON payload from the server is encrypted and can't be read in devtools, plus there's this weird outgoing request to a domain they don't recognize. And absolutely no devops or logging noticed a request to some weird domain (which must be handling tons of requests and have great uptime and scalability, since every single client has to connect to it to decrypt the database).
Oh, and there must not be a DBA at the company either, since absolutely nobody ran a SQL or analytics query in an entire month and noticed that some of the rows were full of unreadable garbage.
This story must be about someone's single server, single dev LAMP blog from ten years ago if it ever actually happened.
The encryption would happen on the server, not the client. Besides, what JSON? The server just returns rendered HTML, as the Lord intended :)
Yes, we are talking about the long tail of little LAMP servers handling small business needs with some custom code written by a contracting shop. What’s a devops? DBA? Once the site is developed and working, there’s little reason to pay to change it.
It's nice to have a restore step in there too: it both validates that the backup is usable and gives one a "playground" with a safe, day-old environment for testing/training/whatnot against production data.
That is essential. If you do not regularly test restoring, you do not know if you have backups. I normally overstate this just a little for effect, "If you don't test your backups you don't have backups", but that isn't precisely true.
And yes, it frequently makes sense to base test/fixture data and whatnot on backup data flows, but unless you don't deal with any PII, you're hopefully not using that data without a sanitization step.
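To make that concrete, here's a rough sketch of an automated restore-and-check step. The assumptions are mine, not from the thread: PostgreSQL, a pg_dump custom-format dump sitting at a known path, and a made-up sanity table called "orders".

```python
# Rough sketch of a scheduled "restore and check" job. SCRATCH_DB, DUMP_PATH,
# and the "orders" table are hypothetical names for illustration.
import subprocess

SCRATCH_DB = "restore_check"        # hypothetical scratch database
DUMP_PATH = "/backups/latest.dump"  # hypothetical dump location

def restore_and_check() -> None:
    # Recreate the scratch database and restore the latest dump into it.
    subprocess.run(["dropdb", "--if-exists", SCRATCH_DB], check=True)
    subprocess.run(["createdb", SCRATCH_DB], check=True)
    subprocess.run(["pg_restore", "--no-owner", "-d", SCRATCH_DB, DUMP_PATH],
                   check=True)
    # A trivial sanity query; real checks would compare row counts against
    # production, spot-check recent rows for readability, and (per the comment
    # above) sanitize PII before anyone uses the copy as a playground.
    out = subprocess.run(
        ["psql", "-d", SCRATCH_DB, "-tAc", "SELECT count(*) FROM orders"],
        check=True, capture_output=True, text=True)
    assert int(out.stdout.strip()) > 0, "restored database looks empty"

if __name__ == "__main__":
    restore_and_check()
```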
To anyone viewing my answer as a guide: a backup by itself is not a continuity plan. There are other important areas, such as setting recovery metrics (RPO/RTO, i.e. recovery point and recovery time objectives) and testing restores against those metrics. The world is rife with stories of unreadable backups and/or restorations that took days instead of hours.
Also, backups must include a physical process of moving a copy of backups to an airgapped secondary system (a human ejecting disks/CDs/tapes and carrying them to another storage container), so that it's impossible for an attacker to compromise the backups via the same software exploit that corrupts the primary data.
Good and easy-to-do point: something as simple as comparing sizes will help, but a diff between backups would tell you much more. I had not thought of that; then again, it comes for free in something like tarsnap.
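A quick sketch of that idea (file names are mine, not from the thread): hash fixed-size chunks of two successive dumps and report how much changed. A sudden jump toward 100% churn between daily backups is exactly the kind of red flag being described; chunk-aligned comparison is crude (one insertion shifts everything after it), which is part of why deduplicating tools like tarsnap give you this signal almost for free.

```python
# Crude churn check between two successive backup files; paths are hypothetical.
import hashlib

def chunk_hashes(path: str, chunk_size: int = 1 << 20) -> list:
    """SHA-256 of each fixed-size chunk of the file."""
    hashes = []
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            hashes.append(hashlib.sha256(chunk).hexdigest())
    return hashes

def churn_ratio(old_dump: str, new_dump: str) -> float:
    """Fraction of chunks that differ between two backups (0.0 = identical)."""
    old, new = chunk_hashes(old_dump), chunk_hashes(new_dump)
    changed = sum(1 for a, b in zip(old, new) if a != b) + abs(len(old) - len(new))
    return changed / max(len(old), len(new), 1)
```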
- Production has no access to backup.
- Backup has read-only access to production.
- Backup writes are appends, not overwrites.
- Deletes/archival are governed by a retention process.
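A minimal sketch of the last two rules at the filesystem level (paths, names, and the retention window are mine, not from the thread): the backup job only ever creates new timestamped files and refuses to overwrite, while deletion lives in a separate retention routine that production never calls.

```python
# Sketch only: BACKUP_DIR and RETENTION_DAYS are hypothetical.
import os
import shutil
import time

BACKUP_DIR = "/mnt/backup"  # hypothetical backup volume, not writable by production
RETENTION_DAYS = 90         # hypothetical retention window

def write_backup(dump_path: str) -> str:
    """Append-only: every backup gets a fresh timestamped name; mode 'xb' fails rather than overwrite."""
    dest = os.path.join(BACKUP_DIR, f"db-{time.strftime('%Y%m%d-%H%M%S')}.dump")
    with open(dump_path, "rb") as src, open(dest, "xb") as dst:
        shutil.copyfileobj(src, dst)
    return dest

def apply_retention() -> None:
    """Run only by the retention process, never by production or the backup writer."""
    cutoff = time.time() - RETENTION_DAYS * 86400
    for name in os.listdir(BACKUP_DIR):
        path = os.path.join(BACKUP_DIR, name)
        if os.path.getmtime(path) < cutoff:
            os.remove(path)
```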