The Perfect Attack
Written in November 2019
DISCLAIMER: This is not your classic IT case study about success and technical skills. Instead, we want to tell you a story of anecdotes and observations from one of the most successful cyber-attacks in recent history. We have cut out the journalistic middleman and the detached experts. The story is not told by those who were hit, but by those who cleaned up and fought off the attack. Come with us behind the scenes, experience a war room, and hear how much pain an attack like this causes. Welcome to the story we all wish had never taken place. The pictures in the story were not taken in connection with the actual company or attack.
The External Threat
Cyber-attacks have become a part of our world, and today all companies and their IT departments operate in a constant state of fear. Over the past 4-5 years, the IT media have focused heavily on stories about every conceivable form of cyber-attack, and there has been no shortage of advice and guidance on how to act. At edgemo, we have not held back either. Yet in 2019 we have to recognize that businesses everywhere, in the East and the West, from the smallest to the very largest, still have deficiencies in their preparedness and in their means of responding to the threats out there. This is a sad fact, and one we will return to later in the story. First, we must go fight the Russians and repair thousands of servers, manually.
“I Want My Datacenter Back”
Let’s turn back the clock to May. Or more specifically, to the Saturday before Pentecost, as Torben Christensen pinpoints it. That may be because the events that followed turned what was otherwise a well-deserved, long-planned public holiday, and a good while thereafter, completely upside down. “I got a call late in the afternoon. It was their Datacenter Manager, who informed me that they had been taken down by ransomware and that he wanted his datacenter back. He sounded utterly exhausted.”
The customer, a large global conglomerate, had not only been taken down locally in Europe, where he was calling from, but across the entire world. More than 400 locations in more than 40 countries, affecting more than 45,000 clients – everything. Torben explains: “When he called, it had been a little more than a week since the attack itself. At first, they had tried a couple of very large foreign consulting companies who were closer to them, but to no avail. After that, they had fought on their own, but now they had given up. Wanting to handle things on your own is understandable, but it was NOT possible. Not when you are faced with what they were. It was an incredibly successful attack.”
Code Red: Special Forces Move In
Torben explains that edgemo had not previously provided consulting services to this customer. “We had sold them a massive amount of hardware, but that was about to change quite substantially,” he says and continues: “I was not told a lot. Merely that I needed to gather a team. So, on a Saturday evening, I was calling around, and by about midnight we had the special forces ready. Early Sunday morning, we had a conference call on Skype. Everything was moving quickly, and it was decided that we would send the first team to the location right away. The second team followed as early as Monday morning.”
Infrastructure Specialist Kåre Overgård recalls: “When the phone rang, I was at a Phil Collins concert, and the first thing I said to Torben was that I did not feel like I knew a lot about ransomware. But Torben was persistent and said: ‘We need to send in people who know a lot. We need to go deep and wide, and we need to come at this with a full deck of expertise. We do not know what they need yet.’”
“We all had very different backgrounds, so it was easy to distribute the tasks among us, and the customer was very impressed by that. We just got to work. We delivered what they needed, quickly. We made the decisions ourselves and we stood by them. That is what matters most to a company in that situation, since they do not have time to hand-hold.”
Out of This World
The character, extent, and success of the attack were confirmed by what the edgemo people were met with. “What we were brought into was out of this world,” Torben says. “It was complete chaos. It looked like the trenches of a war, which in fact it was. People had not slept at home for a while. There were people in sleeping bags, and it smelled of sweat and stressed-out people. People were zombies, but happy to see us,” explains Infrastructure Specialist Alex Sørensen, who was part of the first wave. “Normally, the large headquarters has guards, parking attendants, and whatnot. Now, the parking attendants had been sent home so that the helpers could get in.”
Alex explains that the headquarters’ large cafeteria was closed. “The local Asian grill was making a killing, and the office was flooded with take-out containers. It was extreme. Luckily, the cafeteria opened again after some time,” he says, and shares a story that illustrates the desperate need for help. “We were on our way to lunch when we suddenly saw a crew of six men from Portugal making their way across the parking lot, suitcases and all. They were headed for their plane home. They were VERY quickly chased down and stopped. They were still there after lunch, and for a good while thereafter.”
Serious Business
The Russian “militia” responsible meant serious business. The attack was so successful that it would send chills up any IT Manager’s spine. As a former IT Manager, Torben could relate: “The ENTIRE conglomerate was shut down. The backup servers were encrypted to the point where a data restore could not be run, and they had no Active Directory,” he says. “No AD was running anywhere in the world, and in the datacenter we were in alone, more than 7,000 servers needed to be recreated manually. Even though 80% were virtual, there were no backups, so everything had to be installed by hand.”
Alex elaborates: “Not all servers were backed up, but quite a few were. Unfortunately, on many systems only the data was backed up, not the operating system and the application, which meant everything had to be installed manually. I think many were surprised to find systems on which backup had been actively deselected. That was surely an expensive lesson.” Kåre adds: “Everything had been shut down, so one by one we had to bring the servers up in an offline state, check whether they had been infected, and then rebuild them. It was a massive undertaking.” And that was only the datacenter edgemo was working in. On another continent sat a datacenter of similar size. All in all, this involved more than 40 countries in which not a single PC was operational. More than 45,000 people were out of work across the world. Fortunes flew out the window every minute, for weeks.
Torben has the following to say about the mood in the company: “Everyone was affected. Work had come to a complete stop. General panic spreads, and the support organization quickly falls apart. All of that equals success for the hackers. It is virtually impossible to run manual processes – EVERYTHING in this large corporation is run by IT. The only thing they could do was open the gates and receive products. Beyond that, they were able to run a few processes manually, to a very limited extent,” he says. “The immediate result was that a vast number of employees were sent home for a long time. Many thousands of people, for many weeks. That truly hurts the bottom line – and it hurt those sent home as well, because payroll could not be processed.”
“When you then stop to realize that they could have drastically limited the extent of the damage, it is almost unbearable…”
Brutal Russians
Kåre describes the attack as follows: “As Torben says, the attack was very successful. However, it was not well planned or sophisticated. The one does not imply the other. This was not excellent craftsmanship, but rather wild and brutal. Pure brute force. Unfortunately, that kind has an effect too.” The attack was carried out by a team of Russians who demanded a ransom. They had aimed the attack at IT employees at a location far from Europe. In a short amount of time, the foundation upon which thousands of clients all over the world relied was completely destroyed. That it became as horrific as it did, however, was simply the Russians’ luck. The customer could have limited close to 80-90% of the attack with a few simple measures. We will get back to that once we have finished laying out the extent. We are not even there yet.
“It was clear from the aggressiveness with which they operated that they were not masters of negotiation. Ordinarily, you would receive a decryption key in exchange for the ransom. Here, the crypto locker used was so aggressive that it actually destroyed the servers. As a result, the customer was essentially just as damaged with or without a key. The Russians even exploited old flaws that updates had long since fixed, so this was not an outstanding job at all,” Kåre says. But what difference does that really make when you see the destruction it caused?
For a minimum of two months, edgemo had six employees working on the issue for 14-16 hours a day, seven days a week.
State of Emergency – in the War Room with edgemo
Back to the location, and to May 2019. “We dove into a massive infrastructure we knew nothing about. From there, we all took on the tasks thrown at us, and we did not ask any questions,” Torben says, and Alex adds: “As edgemo specialists, we are used to being in control of things. We control the progress every day. Here, however, we initially had to act on their decisions, which was a new experience. Still, we were given many tasks, and we quickly realized that nobody was coordinating the efforts. Nobody shared anything. So we started sharing among ourselves and, more importantly, we started documenting the processes. Nobody at the location had a tool, so we built one. The customer took notice, and so did the many other external people called in to help. They started looking over our shoulders and using our tool and our documentation. That way, they did not have to start from scratch each time, and six people did not end up doing the same thing. We were able to gain control over a lot of things.”
Torben adds: “It was critical to us, and to the assisting forces from Portugal, Germany, and other countries, that things were sped up. Nerds truly flourish when they get to be a key part of a war room. It is an incredibly inspiring experience.”
When Everything is Critical
Before we can finally tell you how the damage could have been limited, we have one more observation. How does one measure an attack? Pure violence, finesse, or strategy? Our counterparts in this matter clearly did not play Battleship as children, but they probably beat up plenty of kids on the playground, so we will withhold any praise. Regardless of method, in order to understand the enormous success of the attack, one must look at the corporation from the outside. Look at its connections, because they outline the complete extent.
What we are dealing with here is a conglomerate that is part of a global mega-sector. A globally critical key player with especially time-sensitive deliveries in the supply chains of this mega-sector, which in turn consists of several interconnected business and societal sectors. If one player is down for the count, it affects everyone, because human lives are on the line. As such, the attack was about more than “just” stressed-out employees and a massive dark spot in the annual accounts; it reached further out into the world.
Because of this, it should be written in large, invisible letters: SECURITY IS EVERYTHING. Unfortunately, reality was very different. This also points to the enormous paradox that the cause of the attack’s extent lay with the customer itself.
- The bullies did not at all deserve the success they achieved.
“An Extreme No-Brainer!”
Even though this is all about a well-run organization in an equally well-run global conglomerate, there were still gaps in its security. According to our three musketeers, it was a matter of classic pitfalls that many, far too many, fall into. That is also why this story is being told.
Torben gets right to the point: “Let me start by making this clear: one checkmark could have avoided 80-90% of the extent of this attack,” he says. “One has to ask oneself: Did we activate LAPS (Local Administrator Password Solution)? With that, all local administrator passwords are randomized, and it really only takes one checkmark to activate. This is an extreme no-brainer. It is such a no-brainer that nowadays we do not even ask people if they have LAPS – it is the equivalent of asking whether they have passwords at all.” Torben adds that having the same local administrator password on all machines in your organization is the equivalent of having no passwords at all. That was the case here.
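To make the principle concrete – and to be clear, this is not LAPS itself, which stores and rotates the passwords through Active Directory and Group Policy – here is a minimal Python sketch of the idea Torben is pointing at: every machine gets its own randomized local administrator password, so a credential stolen from one box is useless on the next. The hostnames, character set, and rotation interval below are purely illustrative.

```python
# Minimal sketch of the idea behind LAPS: one unique, randomized local
# administrator password per machine, rotated on a schedule.
# NOTE: illustrative only - real LAPS stores these secrets in AD attributes
# and rotates them via Group Policy; hostnames and intervals here are made up.
import secrets
import string
from datetime import datetime, timedelta, timezone

ALPHABET = string.ascii_letters + string.digits + "!#%&*+-=?@"
ROTATION_DAYS = 30  # hypothetical rotation interval


def random_admin_password(length: int = 24) -> str:
    """Generate a cryptographically strong password for a single machine."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))


def rotate(hostnames: list[str]) -> dict[str, dict[str, str]]:
    """Give every host its own password and an expiry timestamp."""
    expires = (datetime.now(timezone.utc) + timedelta(days=ROTATION_DAYS)).isoformat()
    return {host: {"password": random_admin_password(), "expires": expires}
            for host in hostnames}


if __name__ == "__main__":
    inventory = ["srv-dc-01", "srv-app-17", "ws-finance-042"]  # hypothetical hosts
    vault = rotate(inventory)
    for host, entry in vault.items():
        # Every entry differs, so a password (or hash) dumped from one machine
        # cannot be replayed against the rest of the estate.
        print(host, entry["expires"], entry["password"])
```

The point is not the script but the property it demonstrates: with identical local admin passwords, ransomware that lands on one machine effectively holds the keys to all of them.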
Kåre chimes in: “They actually had some expensive and impressive security measures in the upper layers of the system, but since the basics were lacking, it did not really make a difference. Even though those measures were in place, this ransomware just flew from the bottom up through it all and lit fires along the way. The basics are key when designing your security setup,” Kåre says, and explains that they assisted an employee who had not changed his password in 12 years.
“This attack began on the inside, but I have to add that the built-in Windows Firewall and UAC were also disabled. They can be a bit troublesome to work with, but they are there for a reason.”
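As a hedged illustration of how little it takes to check those two basics on a Windows host, here is a small Python sketch. The registry value for UAC (EnableLUA) and the netsh firewall query are standard Windows mechanisms, but treat this as a quick audit idea, not a full compliance tool.

```python
# Quick sketch: check two of the "basics" mentioned above on a Windows host.
# UAC is governed by the EnableLUA registry value; the firewall state can be
# read with netsh. Illustrative only - a real audit would cover far more.
import subprocess
import winreg  # Windows-only standard library module


def uac_enabled() -> bool:
    """Return True if User Account Control is switched on."""
    key = winreg.OpenKey(
        winreg.HKEY_LOCAL_MACHINE,
        r"SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System",
    )
    value, _ = winreg.QueryValueEx(key, "EnableLUA")
    return value == 1


def firewall_state() -> str:
    """Return the on/off state of all Windows Firewall profiles."""
    result = subprocess.run(
        ["netsh", "advfirewall", "show", "allprofiles", "state"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout


if __name__ == "__main__":
    print("UAC enabled:", uac_enabled())
    print(firewall_state())
```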
MAKE the Checkmark Already
According to Alex, the threat in this case was not new. “They had been lurking for months,” he says, reiterating that much of this could have been avoided if the Microsoft patch had been installed when it was released months earlier. “The choice was made to ignore it, and that led to a grotesque, inhuman amount of work. Even the system they had actually built to recreate servers had been hit. So everything had to be done manually.”
To this day, nobody knows whether any remnant of the code is left in the organization, but should that be the case, it can never get out of hand again. They are ready if the worst should happen. According to Torben: “Still, the message has to be that some obvious things come down to pure laziness within companies. If it is not LAPS, it is probably something else. For instance, gaps in the management of mobile devices can have consequences similar to what happened here. So, MAKE the checkmark already and get your updates under control.”
The Scandinavian Rescue
But then what? Did they make it? Naturally, the story ends with the attack being fought off, but it did take a couple of months.
The happy ending came about because the previously mentioned, very intelligent Datacenter Manager played a key role. He happened to have a domain controller sitting somewhere in Scandinavia, completely offline. “From there, they could pull their entire AD. Step one was to recreate the AD. They managed to do so, and from there it was just hard work,” Torben says. At one point, a remnant of the code was found in the system, affecting approximately 2,000 recreated servers and sending all of them back to square one. It was truly a race against time. Ultimately: “If we had not had this Datacenter Manager, we would instead have spent our time recreating the company’s IT from the ground up,” he says.
Take Control of Your Processes – and Your Backup
As Alex puts it, organizations big and small need preparedness that includes procedures for both daily operations and crisis situations. These days, you need to be ready to act. “They were not at all ready to rebuild things on this scale. To their credit, they actually had a ‘factory’ in which to build servers. But when they suddenly needed to build such a large number, it could not keep up. It had a bunch of manual steps, which is fine when you are building a few servers per week, but in this instance we needed to build a couple of thousand per week.”
Kåre adds: “When it comes to backup, it is also very important to have your policies and processes in place. The way they had been running backups was to take a snapshot once every twenty-four hours. When you are hit by an attack like this, you have to look at what is worthwhile. The hours just kept flying out the window,” he says. “They had started by rolling back to an image that was a week old. My guess is that they could have saved thousands of consultant hours by going back a little further, for instance a mere eight days, to a point where the machines were not yet infected. Then they could have rebuilt the servers and machines in bulk and saved days of work and millions of Danish kroner.”
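To see why one extra day of rollback matters so much, here is a back-of-the-envelope sketch of Kåre’s point. All the figures are hypothetical, loosely echoing the per-server times mentioned elsewhere in the story; none of them come from the actual case.

```python
# Hypothetical comparison: bulk-restoring servers from a known-clean image
# versus inspecting and manually rebuilding each one because the restored
# image might already be infected. Figures are illustrative only.
SERVERS = 2_000                  # assumed number of servers to bring back
BULK_RESTORE_MINUTES = 45        # assumed per-server restore from a clean image
MANUAL_REBUILD_MINUTES = 6 * 60  # assumed per-server inspection + manual rebuild

bulk_hours = SERVERS * BULK_RESTORE_MINUTES / 60
manual_hours = SERVERS * MANUAL_REBUILD_MINUTES / 60

print(f"Bulk restore from a clean image: ~{bulk_hours:,.0f} hours")
print(f"Per-server manual rebuild:       ~{manual_hours:,.0f} hours")
print(f"Hours saved by the older image:  ~{manual_hours - bulk_hours:,.0f}")
```

Even with generous assumptions, the gap runs into thousands of hours, which is the order of magnitude Kåre is gesturing at.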
You Need to Learn from Your Mistakes
Kåre talks about the time after the attack. “When the storm had calmed, we visited the customer again to help roll out new passwords. Even that process was truly chaotic. They made very rash decisions. They told 45,000 users to change their passwords with two days’ notice. Needless to say, it was chaos, and they did not forewarn us either. In fact, they repeated this ‘successful’ method later on in connection with a large update. It just proves how difficult it is to break habits and systems. And this company is far from alone. Unfortunately, we see it far too often: people think all is in order because they have bought some tools and systems, but the systems cannot handle it all on their own.”
Alex adds: “The be-all and end-all is to build a preparedness to act. You cannot afford to be caught off guard. The systems for backup and updates have to be in place, but preparedness and readiness are just as important. If not, an ordinary restore task that would normally take 40-45 minutes ends up taking 6-7 hours, as it did here. You have to be ready to recreate EVERYTHING at once. Quickly and with agility.”
Thus ends the tale from behind the scenes of one of the big attacks. We have told you what we are permitted to tell and what we can. The customer knows we are sharing the story. However, the anonymization plays a key role, because this is about all of us getting smarter. The examples could be from anywhere. The pitfalls are well known, and the situations represent what could happen to anyone. Everyone knows the feeling of being sent back to square one, or even further back than that.
Final Note
In September of 2019, the company was attacked once again. The attack made it through the walls, but was knocked back hard and effectively. The organization remained operational during the entire process.
This time, the customer was successful.
We are guessing this boosted their spirits and morale far into the future. Once they have recuperated, of course.