A large share of new vulnerabilities are, at root, parsing bugs, which makes them inherently difficult to fix piecemeal, yet we keep using unreasonably complex languages for protocols and keep stapling on more complexity, resulting in formally assured insecurity.
http://langsec.org/ does a spectacularly poor job of introducing langsec to the uninitiated. It appears to be a list of conferences and papers for academics, followed by http://langsec.org/bof-handout.pdf which makes unsubstantiated assertions and doesn't elaborate. I think more people would learn about langsec if the homepage contained an introduction followed by a guided tour of articles which incrementally teach the current state of the field in an organized accessible fashion.
EDIT: I found https://scribe.rip/1b92451d4764 which purports to be an "introduction followed by a tour", which links to “Security Applications of Formal Language Theory” and “The Seven Turrets of Babel: A Taxonomy of LangSec Errors and How to Expunge Them”. The second seems not very practical/applied or hands-on, and the first is quite long and academic (I haven't read it yet). It might be useful as reference material, but I'd be interested to see examples of designing/refactoring systems to be more secure based on langsec.
I was full-time infosec from 1998 until 2015, then moved into an adjacent role that is still technically infosec but is more infrastructure/platform controls. This is the first time I recall ever seeing the term.
Based on the two-sentence synopsis in the Google results, it's largely indistinguishable from the more familiar "formal methods" or "formal verification".
The paper linked in your EDIT is awesome. I'm an AppSec engineer and I had never encountered a term like "shotgun parser". What the authors describe as shotgun parsing is exactly what I've seen from reviewing validation logic across hundreds of enterprise applications. It's nice to have a name for the pattern.
The worst part of shotgun parsing and loosely defined input structure is the difficulty of remediation. I constantly get pushback from dev teams when I ask them to use regex-based validation per field. What sounds like a simple task becomes extremely difficult because many apps populate datasets via convoluted monolithic endpoints. Dev teams would have to change the way shared services structure and output information. Those shared services are frequently maintained by other teams, and any other applications which consume the same data would also need to be modified.
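To be concrete about what I'm asking for, it's roughly this (a sketch; the field names and patterns are hypothetical, and real patterns depend on the app's actual data formats):

```python
import re

# Hypothetical per-field whitelist patterns for one record type.
FIELD_PATTERNS = {
    "invoice_id": re.compile(r"INV-\d{6}"),
    "currency":   re.compile(r"[A-Z]{3}"),
    "amount":     re.compile(r"\d{1,10}(\.\d{1,2})?"),
}

def validate_fields(record: dict) -> list[str]:
    """Return the names of fields that failed; empty means the record passed."""
    errors = []
    for field, pattern in FIELD_PATTERNS.items():
        value = record.get(field)
        if not isinstance(value, str) or not pattern.fullmatch(value):
            errors.append(field)
    # Reject unexpected fields too, so the accepted language stays closed.
    errors.extend(k for k in record if k not in FIELD_PATTERNS)
    return errors
```

The hard part isn't writing this table; it's that the monolithic endpoint doesn't hand you discrete fields to run it against in the first place.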
In the end, it becomes a compromise where the ad-hoc parsing is tightened/modified to be "good enough". This bubblegum/duct-tape fix only further cements the ad-hoc parsing throughout the org.