The security team of a client had recently happened upon an application that one area of the company had been using for a few years, and felt that it warranted testing. This should have been run-of-the-mill.
The application provided access to key data, over the internet, enabling what had previously been a complex paper-based process to be slimmed down to a few mouse clicks.
It was a client/server system, so we started off by devising a few different attack approaches: performing a man-in-the-middle attack to break open the encryption, discovering the protocols, then exploiting that information to attack the server where the critical data was held.
Just a few minutes into the test it became apparent that things weren't as they should be, from a security point of view. The protocol didn't really exist – the application consisted of a simple client driving some HTTP GET requests; Internet Explorer wasn't being used only because it couldn't display the particular data-file format in-line.
Once the client was bypassed, and the GET requests sent directly to the server, all authentication could be bypassed, enabling the data to be retrieved by anyone, from anywhere. The system was also a shared one.
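The flaw can be illustrated with a short sketch (all names and data here are hypothetical, not from the actual system): if authentication lives only in the client application, the server will hand data to any plain GET request. The example models such a server locally and then "bypasses the client" with a credential-free request.

```python
# Hypothetical model of the flaw: the server performs no authentication
# check, so a direct GET – sent without the client application – retrieves
# the data. Names, paths and record contents are invented for illustration.
import http.server
import threading
import urllib.request

class UnprotectedHandler(http.server.BaseHTTPRequestHandler):
    # Models the vulnerable server: any GET is served, credentials are
    # never inspected because "authentication" happened only client-side.
    def do_GET(self):
        body = b"confidential-record-123"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo quiet

server = http.server.HTTPServer(("127.0.0.1", 0), UnprotectedHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

# "Bypassing the client": a plain GET with no credentials at all.
data = urllib.request.urlopen(f"http://127.0.0.1:{port}/records?id=123").read()
print(data)  # the server hands over the record unchallenged
server.shutdown()
```

The point of the sketch is that nothing in the request distinguishes the legitimate client from an attacker with a browser or a one-line script – which is exactly why anyone, from anywhere, could pull the data.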
A few moments of digging around identified other organisations using the system – and, after that, their data could be accessed too.
Analysis of the infrastructure supporting the application showed that it was very basic and shared between numerous customers of the ISP, so it provided no measures specific to the application, such as a dedicated firewall policy, nor countermeasures such as an Intrusion Prevention System. Of course, our “attacks” on the system weren't noticed – despite the critical nature of the data being handled, there was little evidence of any operational security.
It was clear that the system was not safe to use – and had never been. Our client's data was pulled from the system, which, while removing the potential for data leakage, meant that they could no longer use the system. Although this was inconvenient, the real question was who else, in the last three years, had been dipping into this rather interesting source of data.
What was expected to be a simple test led to a full-scale incident response exercise, ranging from forensic investigation through to notification of the various parties that might have been affected by any data loss. It was expensive in both money and other resources, and potentially damaging to the company's reputation, at least among its clients and partners.
So what went wrong? This is an organisation that takes information security seriously. Yet this had been overlooked – being missed from the normal risk assessments and operational security processes such as regular vulnerability testing.
The problem was that this was a fully outsourced system. Data was being passed to a third party, which, to complicate things, then passed that data in turn to another third party – the one that had built and was running the system. Because the data was no longer on the organisation's own network, and used none of its infrastructure, the system was simply overlooked.
This example is something that I am seeing more frequently: while an organisation might have some great security systems and practices, all too often the focus is on “their” systems – everything from the firewall back to the data centre. It is all too easy to focus on those visible bits while overlooking the security aspects of systems acceptance.
Don't think about security as something that only applies to your core systems, infrastructure and websites.
Really, security isn't about all those obscure and fascinating topics that we as security professionals like to spend most of our time thinking about. It's about the basics: know where your data is, and think about the risks to it – especially who has it and what they are doing with it. Then apply an appropriate level of security.
See original article on scmagazineus.com