If capitalism insists on those higher up getting exorbitantly more money than those doing the work, then we have to hold them to the other thing they claim they believe in: that those higher up also deserve all the blame.
It’s a novel concept, I know. Leave the Nobels by the doormat, please.
Wait, are you trying to say that Risk/Reward is an actual thing?
/s (kinda)
It doesn’t seem unfair for executives to earn the vast rewards they take from their business by also taking on total responsibility for that business.
Moreover, that’s the argument you hear when talking about their compensation. “But think of the responsibility and risk they take!”
Was there a process in place to prevent the deployment that caused this?
No: blame the higher up
Yes: blame the dev that didn’t follow process
Of course there are other intricacies, like if they did follow a process and perform testing, and this still occurred, but in general…
If they didn’t follow a procedure, it is still a culture/management issue that should follow the distribution of wealth 1:1 in the company.
"George Kurtz, the CEO of CrowdStrike, used to be a CTO at McAfee, back in 2010 when McAfee had a similar global outage. "
Wonder if he partied with John?
John left McAfee 15 years earlier
I do wonder how frequent it is that an individual developer will raise an important issue and be told by management it’s not an issue.
I know of at least one time when that’s happened to me. And other times where it’s just common knowledge that the central bureaucracy is so viscous that there’s no chance of getting such-and-such important thing addressed within the next 15 years. And so no one even bothers to raise the issue.
Reminds me of Microsoft’s response when one of their employees kept trying to get them to fix the vulnerability that ultimately led to the Solar Winds hack.
https://www.propublica.org/article/microsoft-solarwinds-golden-saml-data-breach-russian-hackers
And the guy now works for CrowdStrike. That’s ironic.
I’m imagining him going on to do the same thing there and just going “why am I the John McClane of cybersecurity? How can this happen AGAIN???”
His next prospective employer might look at his job history and suddenly decide that the position is no longer available.
If you don’t test an update before you push it out, you fucked up. Simple as that. The person or persons who decided to send that update out untested, absolutely fucked up. They not only pushed it out untested, they didn’t even roll it out in offset times from one region to the next or anything. They just went full ham. Absolutely an idiot move.
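For anyone who hasn’t done fleet deployments, “offset times” just means a staged rollout. Here’s a minimal sketch in Python of what that gate could look like; the wave names, thresholds, and the push_update / get_error_rate hooks are all made up for illustration, not how CrowdStrike actually ships:

```python
import time

# Hypothetical rollout waves: names and fractions are invented for illustration.
WAVES = [
    ("internal canary", 0.001),   # dogfood machines first
    ("early adopters", 0.01),
    ("region 1", 0.10),
    ("region 2", 0.40),
    ("everyone else", 1.00),
]

def healthy(error_rate: float, threshold: float = 0.001) -> bool:
    """Crude health check: halt if crash/error reports exceed the threshold."""
    return error_rate < threshold

def staged_rollout(push_update, get_error_rate, soak_minutes: int = 30) -> None:
    """Push the update wave by wave, watching telemetry between waves."""
    for name, fraction in WAVES:
        push_update(fraction)              # deploy to this slice of the fleet
        time.sleep(soak_minutes * 60)      # let telemetry come in before continuing
        if not healthy(get_error_rate()):
            raise RuntimeError(f"halting rollout: bad signal after wave '{name}'")
```

Even a crude gate like that stops a bad update after the first slice of machines instead of the whole planet.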
The bigger issue is the utterly deranged way in which they push definitions out. They’ve figured out a way to change what their kernel driver does without actually putting the change through any kind of Microsoft testing process. Utterly absurd way of doing it. I understand why they’re doing it that way, but the better approach would have been to come up with an actual proper solution with Microsoft, rather than this workaround that seems more like a hack.
This is the biggest issue. Devs will make mistakes while coding. It’s the job of the tester to catch them. I’m sure some mid-level manager said “let’s increase the deployment speed by self-signing our drivers” and forced a poor schmuck to do this. They skipped internal testing and bypassed Microsoft testing.
That mid-level manager also has the conflicting responsibility to ensure the necessary process is happening to reliably release, and that a release can’t happen unless that process happened. They goofed, as did their manager who prized cheapness and quickness over quality.
Let them all, from the CEO down, suffer the consequences that a free market supposedly deals.
We still don’t know exactly what happened, but we do know that some part of their process failed catastrophically and their customers should all be ready to dump them.
I’m quite happy to dump them right now. I still don’t really understand why we need their product; there are other solutions that seem to work better and don’t kill the entire OS when they have a problem.
The kernel driver devs also fucked up. Their driver couldn’t handle reading a file full of zeros.
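We don’t know what the actual parser looks like (it’s a closed-source C kernel driver), but the “validate before you trust it” idea is easy to sketch. Python here purely for illustration; the file names and the minimum-size check are assumptions on my part:

```python
# Illustration only: the real code is a closed-source C kernel driver.
# The point is just "validate the file before acting on it, and fall back
# to known-good definitions instead of crashing".
LAST_KNOWN_GOOD = "channel.prev"   # hypothetical fallback file

def load_definitions(path: str) -> bytes:
    with open(path, "rb") as f:
        data = f.read()
    if len(data) < 16:                     # arbitrary minimum, for illustration
        raise ValueError("definition file truncated")
    if data.count(0) == len(data):
        raise ValueError("definition file is all zeros")
    return data

def load_with_fallback(path: str) -> bytes:
    try:
        return load_definitions(path)
    except (OSError, ValueError):
        # Log it and keep running on the previous definitions rather than
        # taking the whole machine down.
        return load_definitions(LAST_KNOWN_GOOD)
```

Reject garbage, fall back to the last known-good definitions, log it. Anything but dereferencing whatever happens to be in the file.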
If I’m responsible for the outcome of the business, I want a fair share of the profits of the business.
Wild theory: could it have been malicious compliance? Maybe the dev got a written notice to do it that way from some incompetent manager.
While that’s always possible, it’s much more likely that pressures to release quickly and cheaply made someone take a shortcut. It likely happens all the time with no consequences so is “expected” in the name of efficiency, but this time the truck ran over grandma.
We have rules against that at my job… literally, if God came down and wrote something out of process, that’d be a no, big guy.
I get that it’s not the point of the article or really an argument being made but this annoys me:
We could blame United or Delta that decided to run EDR software on a machine that was supposed to display flight details at a check-in counter. Sure, it makes sense to run EDR on a mission-critical machine, but on a dumb display of information?
I mean yea that’s like running EDR on your HVAC controllers. Oh no, what’s a hacker going to do, turn off the AC? Try asking Target about that one.
You’ve got displays showing live data and I haven’t seen an army of staff running USB drives to every TV when a flight gets delayed. Those displays have at least some connection into your network, and an unlocked door doesn’t care who it lets in. Sure you can firewall off those machines to only what they need, unless your firewall has a 0-day that lets them bypass it, or the system they pull data from does. Or maybe they just hijack all the displays to show porn for a laugh, or falsified gate and time info to cause chaos for the staff.
Security works in layers because, as clearly shown in this incident, individual systems and people are fallible. “It’s not like I need to secure this” is the attitude that leads to things like our joke of an IoT ecosystem. And to why things like CrowdStrike are even made in the first place.
As a counterpoint to this article’s counterpoint: yes, engineers should still be held responsible, as well as the management and systems that support negligent engineering decisions.
When the article brings up structural engineers and anesthesiologists getting “blamed” for a failure: when catastrophic failures occur in those fields, it’s never about blaming a single person but about investigating the root cause. Software engineers should be held to standards, and the managers above them pressuring unsafe and rapid changes should also be held responsible.
Education for engineers includes classes like ethics, and at least at my school, graduating engineers take oaths to uphold integrity, standards, and obligations to humanity. Software engineering has long been used to build tools and systems integral to people and society; if a fuck-up can cost human lives, then the entire field needs to be reevaluated and held to that standard and responsibility.
This is why every JR engineer I’ve mentored is handed a copy of the Sysadmin Code of Ethics on day one, along with a copy of The Practice of System and Network Administration.
We really need a more formal process for holding the title of engineer, and we really need a guild. LOPSA/USENIX and CWA are, from what I can tell, the closest to having anything. Because eventually some congressperson is going to get visited by the good idea fairy and try to come down on our profession. So it’s up to us to get our house in order before they do.
CTOs that outsourced to software they couldn’t and didn’t audit are to blame first. Not having a testing pipeline for updates is to blame. Windows having a verification-system loophole is to blame. CrowdStrike not testing this patch is to blame. Them building a system to circumvent inspection by MS is their fault.
Now within each org there is probably some distribution of blame too, but the execs in charge are first and foremost in charge…
Honestly, in some cases the damages are probably serious enough that I suspect every org will have to pay some liability for the harms their negligence caused. If our system is just, that is. And if it is not, then we have a duty to correct that as well.
I’ve said it before and I’ll say it again.
Corporate culture is a malicious bad actor.
Corporate culture, from management books to magazine ads to magic quadrants is all about profits over people, short term over stability, and massaging statistics over building a trustworthy reputation.
All of it is fully orchestrated from the top down to make the richest folks richer right now at the expense of everything else. All of it. From open floor plans to unlimited PTO to perverting every decent plan whether it be agile or ITIL or whatever, every idea it lays its hands on turns into a shell of itself with only one goal.
Until we fix that problem, the enshittification, the golden parachutes, and the passing around of horrible execs who prove time and time again they should not be in charge of anything will continue as part of the game where we sacrifice human beings on the Altar of Record Quarterly Profits.
Blame the dev who pressed “Deploy” without verifying the config file wasn’t full of 0’s or testing it in a sandbox first.
If the company makes it possible for an individual developer to do this, it’s the company’s fault.
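The check shouldn’t depend on one person remembering to look. Something like this belongs in the pipeline itself, so “Deploy” can’t even happen on a file full of zeros or without a recorded sandbox run. A rough sketch, with everything hypothetical and simplified:

```python
import sys

def artifact_is_sane(path: str) -> bool:
    """Cheap sanity checks the pipeline runs before anything ships."""
    with open(path, "rb") as f:
        data = f.read()
    return bool(data) and data.count(0) != len(data)   # not empty, not all zeros

def deploy_gate(path: str, sandbox_run_passed: bool) -> None:
    """Refuse to deploy unless the artifact is sane and a sandbox run passed."""
    if not artifact_is_sane(path):
        sys.exit(f"BLOCKED: {path} failed the content sanity check")
    if not sandbox_run_passed:
        sys.exit("BLOCKED: no passing sandbox run recorded for this artifact")
    print(f"OK to deploy {path}")
```

If that gate exists and someone bypasses it, that’s on them. If it doesn’t exist, that’s on the company.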
Exactly. All of our code requires two reviews (one from a lead if it’s to a shared environment), and deploying to production also requires approval of 3 people:
- project manager
- product owner
- quality assurance
And it gets jointly verified immediately after deploy by QA and customer support/product owner. If we want an exception to our deploy rules (low QA pass rate, deploy within business hours, someone important is on leave, etc), we need the director to sign off.
We have <100 people total in the development org, probably closer to 50. We’re a relatively large company, but a relatively small tech team within a non-tech company (we manufacture stuff, and the SW is to support customers w/ our stuff).
I can’t imagine we’re too far outside the norm for how big-org deployments work. So that means several people saw this change and decided it was fine. Or at least that’s what should happen at a multi-billion dollar company (much larger than ours).
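If you boiled our deploy policy down to code, it would be roughly this. A toy model only: the role names mirror what I described above, everything else is invented:

```python
# Toy model of the approval policy described above; role names mirror the
# comment, everything else is made up.
REQUIRED_FOR_PROD = {"project manager", "product owner", "quality assurance"}

def can_deploy(approvals: set[str], code_reviews: int,
               exception: bool = False, director_signed: bool = False) -> bool:
    """Two code reviews plus all three sign-offs, or a director-approved exception."""
    if exception:
        return director_signed
    return code_reviews >= 2 and REQUIRED_FOR_PROD <= approvals

# Example: a missing QA sign-off blocks the deploy.
assert not can_deploy({"project manager", "product owner"}, code_reviews=2)
assert can_deploy({"project manager", "product owner", "quality assurance"}, code_reviews=2)
```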
Are “product” (PM, PO) and “engineering” (people who write the code) one and the same where you work? Or are they separate factions?
No, separate groups. We basically have four separate, less-technical groups that are all involved in some way with the process of releasing stuff, and they all have their own motivations and whatnot:
- PM - evaluated on consistency of releases, and keeping costs in line with expectations
- PO - evaluated on delivering features customers want, and engagement with those features
- QA - evaluated on bugs in production vs caught before release
- support - evaluated on time to resolve customer complaints
- devs - evaluated on reliability of estimates and consistency of work
PM, PO, and QA are involved in feature releases, PM, QA, and support are involved in hotfixes. Each tests in a staging environment before signing off, and tests again just after deploy.
It seems to work pretty well, and as a lead dev, I only need to interact with those groups at release and planning time. If I do my job properly, they’re all happy and releases are smooth (and they usually are). Each group has caught important issues, so I don’t think the redundancy is waste. The only overlap we have is our support lead has started contributing code changes (they cross-trained to FE dev), so they have another support member fill in when there’s a conflict of interest.
My industry has a pretty high cost for bad releases, since a high severity bug could cost customers millions per day, kind of like CrowdStrike, so I must assume they have a similar process for releases.
You can blame his leadership who did not authorise the additional time and cost for sandbox testing.