Learning about a network from afar, whether actively or passively, is one of the first things you do when deciding to penetrate a computer system. There are a variety of tools that can help along the way, some of which I will cover here. While much of this seems like common sense, that is exactly why it is often overlooked, and it can mean the difference between getting in and calling it a bust.
What are we looking for? OSINT: anything that gives us inside information about the network. That means usernames; passwords (whether in plain text or hashed); databases we can download; information about the network topology, such as how many machines there are, what type of networking hardware is in place, which OS and software versions each machine runs, what subnet(s) the machines sit on, and how many networks are at play; whether there is a VPN in the picture; and whether an intrusion prevention/detection system, firewall, or WAF sits inline (and what it may let slip through). We also want to collect any information we can on employees, users, and administrators, such as their names, addresses, and phone numbers. We can collect this information in many ways.
When feeling out a network, one thing you're going to need to do is see how far the network umbrella reaches. An easy way to do that is to enumerate all the primary domains you'll be attacking (check scope!), as well as their respective subdomains.
The primary domains should either be listed in scope or can be found via a Google search or two, but the subdomains sometimes aren't quite as public. Hence, I usually use a subdomain enumeration tool; a personal favorite for this is Sublist3r.
I recommend doing this before firing off a long-running nmap scan (slow, so its accuracy is best), because then you can add each address the subdomain enumerator finds to the list of hosts for nmap to scan!
I would run: python3 sublist3r.py -d example.com -t 2 -o example.com.subdomains
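Sublist3r's -o flag writes one subdomain per line, so turning that file into an nmap-ready target list is quick. The file names below follow the example above, and the printf line is only a stand-in for real Sublist3r output so the snippet runs on its own:

```shell
# Stand-in for real sublist3r output (one subdomain per line, with duplicates):
printf 'a.example.com\nb.example.com\na.example.com\n' > example.com.subdomains
# Deduplicate into a target list that nmap can consume via -iL:
sort -u example.com.subdomains > targets.txt
```

You can then hand the whole list to nmap in one go with nmap -iL targets.txt.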
From there I usually start with port scanning, because the results will tell me where to go next. For this I recommend Nmap, a tool with a huge number of options and configurations for mapping out a network's weak points: finding open ports, enumerating services and their software versions from their characteristics, detecting operating system versions, and running scripts to enumerate things like Samba shares and web server configuration.
While I could do an entire write-up on nmap alone, I'm going to stick to the basics here. Here is the help output; as always, you can find more detailed information about nmap and most other commands by using the man command and reading the documentation.
As you can see, there is a plethora of options for different scan types, types of networks, output formats, timing, OS and service detection, even evasion techniques. To get started, I like to run a slow, thorough scan of every TCP port with service and OS detection enabled.
This of course can and should be adapted to your specific use case. For example, if you know that a machine is running a UDP service, use -sU in place of, or in addition to, -sS.
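A slow-but-thorough opening scan might look like the following sketch. The flag choices here are my own assumption of sensible defaults, not a prescription, and the guard makes it a no-op if nmap or the target list is missing:

```shell
TARGETS="targets.txt"                            # hosts gathered during subdomain enumeration
NMAP_OPTS="-sS -sV -O -T2 -p- -oA initial_scan"  # SYN scan, service versions, OS detection,
                                                 # polite timing, all TCP ports, all output formats
# -sS and -O require root, hence sudo; skip entirely if prerequisites are absent
if command -v nmap >/dev/null 2>&1 && [ -f "$TARGETS" ]; then
  sudo nmap $NMAP_OPTS -iL "$TARGETS"
fi
```

The -T2 timing template is the main accuracy/speed trade-off knob here; bump it up on networks you control.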
Around this time I start doing what you may have heard called Google dorking: using Google, or another search engine that supports extended search attributes, to find sensitive files such as logs, programs, backups, domains, and vulnerable code. I normally put site:oxasploits.com at the beginning of the dork and apply the actual dork after that, so that we only get listings related to the domain in our scope. For example, site:oxasploits.com filetype:log will search for files with the extension .log under the oxasploits domain. An extensive searchable Google dorking database can be found at exploit-db; I recommend looking through it to get a feel for how Google dorks are assembled. Then, if you are feeling clever, or need to find something specific, you can look through the following list of advanced search operators that may help you build a query.
"term1 term2" - words in this specific order
A OR B - results will be related to A or B
A | B - same as OR
A AND B - results will be related to both A and B
-term - results do not mention this
* - wildcard for a phrase
define: - search for a definition
cache: - search for the most recent cache of a page
filetype: - file ends in this extension
ext: - same as filetype:
site: - website which results will come up for
related: - websites related to a domain
intitle: - document has this in its title
allintitle: - document has these multiple words in its title
allinurl: - words are in the URL
inurl: - word string is in the URL
weather: - weather at a location
stocks: - information about a ticker symbol
map: - search Google Maps for this location
movie: - search info about a movie
source: - search from a specific Google News source
before: - results before this date
after: - results after this date
X..Y - search within a number range
inanchor: - backlinks containing this anchor text
allinanchor: - backlinks containing all these words in anchor text
AROUND(X) - two words within X words of each other
loc: - search results from a specific location
location: - find news from this location
daterange: - search results from within this date range
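Putting a few of these operators together, a scoped dork can be assembled once and reused; the domain and search terms below are just examples:

```shell
DOMAIN="oxasploits.com"
# Combine scoping (site:), a file-extension filter, and a quoted phrase match:
DORK="site:$DOMAIN filetype:log intitle:\"index of\""
echo "$DORK"   # paste into Google as-is
```

Swapping the DOMAIN variable lets you rerun the same dork against each primary domain in scope.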
We can employ vulnerability scanners to check which services we might successfully attack in a later phase of the hack. There are tons of these, but one that I like is Nikto, a CGI scanner that enumerates HTTP servers running server-side scripts (PHP, Perl, and so on) whose software versions could have security bugs. Nikto will scan for footholds that let you leverage RCE via LFI, RFI, Perl open() read bugs, and more.
Another remote vulnerability scanner I use frequently is WPScan, which is geared towards finding vulnerable WordPress installations. One caveat with this tool: you will need to go to WPScan's website and generate an API key to use it.
An example command would look something like: nikto -Cgidirs all -Format txt -host www.example.com -mutate 3 -output www.example.com.nikto -port 443 -ssl -Tuning x. As always, feel free to experiment and change these options at your discretion; see what works for this specific server!
So a WordPress website could be scanned for attack vectors like: wpscan --url https://blog.example.com/ -v -o blog.example.com.wpscan -t 4 --api-token [token goes here] -e ap,at,cb,dbe,u --plugins-detection mixed. Of course, feel free to include anything else you already know about the server, such as usernames; and if you know there is a WAF involved, I recommend dropping mixed from the last option and adding the --stealthy option.
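To keep the two scans consistent across hosts, I sometimes wrap them in a small script. The host name and token placeholder below are examples, and the guards make the sketch a no-op on machines where the tools are not installed:

```shell
HOST="blog.example.com"
TOKEN="REPLACE_WITH_WPSCAN_API_TOKEN"   # generated on WPScan's website
# CGI/web-server scan with Nikto, options as in the example above:
if command -v nikto >/dev/null 2>&1; then
  nikto -Cgidirs all -Format txt -host "$HOST" -port 443 -ssl \
        -output "$HOST.nikto" -Tuning x
fi
# WordPress-specific scan with WPScan:
if command -v wpscan >/dev/null 2>&1; then
  wpscan --url "https://$HOST/" -o "$HOST.wpscan" -t 4 \
         --api-token "$TOKEN" -e ap,at,cb,dbe,u --plugins-detection mixed
fi
```

Looping the HOST variable over your subdomain target list turns this into a quick first-pass web audit.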
Screenshots can hold gobs of OSINT about a target. You can find anything from locations (by googling the surroundings), to names (from social media profiles visible in the shot), to the length of a password (by counting the stars); and if the screenshot is of something technical, information on what services are running or the network topology. I am guilty of this as well: I frequently snap screenshots to show my hacker buddies, and often neglect to black out or otherwise censor the sensitive information. Behold, one of my screenshots (try to resist rooting me, please)!
From this screenshot, try to pick out every piece of information about my network that you can, before checking below. You’ll be amazed once you start noticing things.
Running an Xorg server
Window manager is Fluxbox
The time and date the shot was taken
My username on the machine and my hostname
My kernel version
OS/Distribution is Kali Linux
My user's uid, my primary group is 'users', and I am in the 'sudo' group
Port 9000 is open on another machine on my network
I'm running Graylog on another host, which suggests I am probably also running OpenSearch or Elasticsearch, and MongoDB
The Graylog instance is not encrypted
I have bookmarks that indicate I have a job, may have a media server on the network, and embarrassingly, enjoy porn
I have an active Google account
A picture of me
I'm connected to a wireless network
My computer is a laptop because you can see the battery icon
I'm torrenting something
My browser is Google Chrome
I like to watch video in pip while I'm working (icon for the pip shortcut plugin)
I use the command line to download things frequently (curl/wget plugin)
I use awesome screenshot to take pictures of only what is in the browser window
My Google Chrome version/User Agent
My Graylog NodeID
It took only 0.01 seconds for Graylog to respond after searching lots of data over 7 days, so it's a fast server
I run OpenSSH servers on my network, and the low number of hits to it suggests firewalling
I have a user on one server on the network called 'webmaster'
I run an Apache2 server with SSL
I run ntopng for network analysis
The 10.0.2.0/24 network is protected by an IPS
My torrent client has DHT enabled
My firewall is netfilter/iptables
Four hostnames on my network are likon.dev.oxasploits.com, zerkon.dev.oxasploits.com, oxasploits.com, and vpn.oxasploits.com
Part of my monitoring suite uses Prometheus
A program recently crashed and dumped core
My screen resolution
Network reconnaissance is a pretty lengthy process, and the larger the network, the longer it takes to do a thorough job. There are a couple of things I left out of this tutorial for brevity, which I will list here so you can keep them in mind. You can use curl to grab HTTP/HTTPS headers and learn plenty about a web server simply by sifting through those details; you can fetch just the headers with curl --head. Once you have enumerated some users on a system or two, you should also use a tool such as DirBuster to map out which directories on the HTTP server are exposed to the internet, serving pages users have put up; these user-designed pages are not normally the most secure of the bunch. And if you are stalled while gathering information about users, try an OSINT tool such as Maltego: it can help tie users to their respective company positions and find their phone numbers, full names, dates of birth, addresses, even social security numbers, which can be extremely useful in a later stage of the hack: password cracking.
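For the curl --head tip above, a quick grep pulls out just the headers that matter for recon. The sample response below is fabricated for illustration; in practice you would pipe curl -sI https://target/ into the grep instead, and the filter list is my own pick of commonly leaky headers:

```shell
HEADERS='HTTP/1.1 200 OK
Server: Apache/2.4.57 (Debian)
X-Powered-By: PHP/8.1.2
Content-Type: text/html; charset=UTF-8'
# Keep only headers that leak software/stack details:
printf '%s\n' "$HEADERS" | grep -iE '^(server|x-powered-by|via|x-aspnet-version):'
```

Here the Server and X-Powered-By lines alone hand you a web server version and a scripting-language version to check against known vulnerabilities.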