Presenting at the Black Hat USA conference this week, Trend Micro made an interesting comment:
Over the last few years, we’ve noticed a disturbing trend – a decrease in patch quality and a reduction in communications surrounding the patch. This has resulted in enterprises losing their ability to accurately estimate the risk to their systems. It’s also costing them money and resources as bad patches get re-released and thus re-applied.
The comment came as an explanation of their new 0/30/60/90 disclosure timeline for patches. But it brings up a very painful point.
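To make the timeline concrete, a tiered disclosure policy is essentially a mapping from a bug's classification to a hard publication deadline. A minimal sketch (the tier names and which class of bug lands in which window are assumptions for illustration, not ZDI's exact mapping):

```python
from datetime import date, timedelta

# Hypothetical mapping of bug classes to disclosure windows, in days.
# The 0/30/60/90 figures are the announced tiers; the class names
# attached to them here are illustrative assumptions.
DISCLOSURE_WINDOWS = {
    "actively_exploited": 0,      # already being exploited: disclose immediately
    "failed_patch_critical": 30,  # a critical patch that didn't actually fix the bug
    "failed_patch_high": 60,
    "failed_patch_other": 90,
}

def disclosure_deadline(bug_class: str, reported: date) -> date:
    """Return the date on which details would be published for this bug class."""
    return reported + timedelta(days=DISCLOSURE_WINDOWS[bug_class])

print(disclosure_deadline("failed_patch_critical", date(2022, 8, 1)))  # 2022-08-31
```

The point of the shortened windows is the pressure they create: a vendor that ships a bad patch for a critical bug now has 30 days, not an open-ended negotiation, before the details go public.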
Let’s say a vulnerability is announced and a patch is available. Organizations schedule time to implement these patches – and if you have thousands or tens of thousands of systems, that’s no joke. Then, after getting all stakeholders to agree and lining people up to do the work (it’s always in the middle of the night), there’s an exhausting push, and then some exec gets a report saying the enterprise is safe.
Now imagine if the next day you got an email saying “actually, that didn’t really fix things, you need to do it all over again”. This is how murders happen.
So TM’s approach is to slow down a little, which is a double-edged sword. If you rush out a fix and later need to patch the patch, at least you’ve hopefully blunted the initial assault. On the other hand, if you wait longer to release a more polished, better-tested patch, people’s systems might already be violated.
Read more about TM’s approach on the Zero Day Initiative blog.

Raindog308 is a longtime LowEndTalk community administrator, technical writer, and self-described techno polymath. With deep roots in the *nix world, he has a passion for systems both modern and vintage, ranging from Unix, Perl, Python, and Golang to shell scripting and mainframe-era operating systems like MVS. He’s equally comfortable with relational database systems, having spent years working with Oracle, PostgreSQL, and MySQL.
As an avid user of LowEndBox providers, Raindog runs an empire of LEBs, from tiny boxes for VPNs to mid-sized instances for application hosting to heavyweight servers for data storage and complex databases. He brings both technical rigor and real-world experience to every piece he writes.
Beyond the command line, Raindog is a lover of German Shepherds, high-quality knives, target shooting, theology, tabletop RPGs, and hiking in deep, quiet forests.
His goal with every article is to help users, from beginners to seasoned sysadmins, get more value, performance, and enjoyment out of their infrastructure.
You can find him daily in the forums at LowEndTalk under the handle @raindog308.