Rideshare To Jail – Do Not Pass Go
Fallout From Uber CISO Joe Sullivan’s Conviction
Richard Nixon’s presidency ended because of eighteen and a half minutes of missing audio – not because he was conclusively tied to the Watergate break-in. Al Capone was finally jailed for tax evasion, not any of the stuff he did to get the money in the first place. So too, it seems, Joe Sullivan has been convicted not because Uber got breached under his watch, but because of how he covered it up.
For anybody unfamiliar with the details, here’s a quick summary. In 2016, Uber was under FTC investigation for a 2014 data breach. Mr. Sullivan had essentially just assured the FTC that a similar breach couldn’t happen again, because he had fixed the issues that allowed the 2014 breach to occur. When a new breach did happen, he reportedly tried to hide it from the FTC by claiming it was the work of ethical bug-bounty hackers who had found a vulnerability and negotiated a payment under Uber’s bug-bounty program. In fact, these were malicious hackers, and had Uber not paid them they would have released the data they had collected – including PII and PCI data.
Before I go further on the main topic, I’ve got a sidebar. Bug bounty programs, and the people who work hard to make a “gig economy” living through them, provide an ethical and invaluable service to software developers (and software users) everywhere. This is an industry that started as a pariah, was openly bashed by names both big and small in tech, was begrudgingly accepted, and is now a respected part of the cybersecurity ecosystem. Stunts like Sullivan’s – passing off a malicious attack as part of a bug bounty program – erode trust in this part of our industry, and it simply doesn’t deserve that.
We may never know if Sullivan thought he was being a good corporate citizen or thought he was saving his own a$$ when he did this. We may never know if he made this decision himself, or if he feared for both his job and his career when he decided this was the thing to do. We may never know if Sullivan was pushed into this decision by his board or his C-level peers. But it isn’t hard to believe that any of those things could have happened.
The pressure CISOs are under today is, in a word, brutal. As one of the newest additions to the corporate C-suite – and sometimes not even considered a peer to the other C-level executives – the CISO has the unenviable job of securing things well outside of their control, against external enemies, internal enemies, unpatched vulnerabilities, and mistakes made by anybody configuring technology for their organization. They have to do this while often being considered an impediment to the business doing its thing – making money hand over fist. They can’t hire enough good people, because there aren’t enough of them to hire. They carry tech debt both within the IT infrastructure they’re supposed to be protecting and in security tools that, while state-of-the-art only four years ago, are sadly outpaced by the cybersecurity perils that exist today.
And they’re pressured to minimize breaches. The first rule of breach club is we don’t talk about breach club. We have plenty of examples of ransomware attacks being described as minor network outages. Nearly two-thirds of companies hit with ransomware have admitted to hiding that information, which is part of the reason why multiple governments, as well as multiple government agencies here in the US, are now demanding mandatory reporting of cybersecurity breaches.
It is surprisingly common for DFIR engagements to have the DFIR provider officially report to outside counsel. This is an attempt to have sensitive data found by the DFIR provider treated as legal work product, making it harder to subject the findings to discovery if any lawsuits or legal proceedings result from the breach being investigated. I’m even aware of some companies doing this with penetration testing engagements. Ask your friendly neighborhood PCI QSA, or anyone who provides security risk assessments, about the pressure to revise findings and soften language to blunt negative results. Companies don’t want evidence of their security deficiencies to exist, and this attitude likely set the stage for the events that led to Mr. Sullivan’s conviction.
Of course no company wants to have to disclose that they were breached. They’re worried their stock will tumble (it often does in the short term). They’re worried they’ll lose customers (they may). And they’re worried their competitors will get a leg up (they will, until their competitors get breached, too). Frankly nobody wants the FTC or <insert your least favorite governmental entity here> all up in their business because of a breach. All of that takes away from the business’ goal of making money hand over fist.
So I can understand what pressures could have led to this outcome. It may have seemed like the best decision at the time, threading a needle between disasters. But what occurred here is extremely disturbing and alarming.
The reality is, breaches are going to happen. In cybersecurity circles this is well understood, which is why we continually improve our defenses. We have a duty to adjust our programs to balance protection with detection, response, and recovery, so that the impact of these breaches is minimal.
I believe we have to shift our mindsets from “we don’t talk about breaches” to valuing risk management, quick response, and earnest transparency about breaches when they occur. Let’s celebrate the JJ Keller/Amerigas breach (one customer record lost, because the breach was stopped in eight seconds) and others as examples of doing these things well. Sure, a security program needs to meet some minimum capabilities and a standard of due care, but CISOs have to have room for their programs to not be perfect, for the very reason that a perfect security program is not achievable. The security industry knows this; we’ve all known it for a long time. It’s time the board, the C-suite, the investors, and the customers came to terms with it too. Let’s never give another CISO a reason to try to cover up a breach.
Deepwatch was designed to partner with customers in responding to issues as quickly as they arise – certainly to minimize breaches, but also to detect, respond, and recover when something does slip past an organization’s defenses. CISOs need not endure these pressures alone – find out how Deepwatch can be your partner in staying ahead of rising threats.