Use crt.sh To Identify Domains And Sub-Domains That Belong To An Organization

Using crt.sh to gather information about an organization.

Background:

A Certificate Transparency (CT) log is an append-only, publicly auditable record of digital certificates issued by publicly trusted certificate authorities.

The main goal of Certificate Transparency is to provide a publicly available system of logs so that any domain owner can verify which certificates have been issued for their domains, detect certificates that were issued mistakenly or maliciously, and help prevent users from being tricked by fraudulent certificates.

crt.sh is a free online certificate search engine that indexes Certificate Transparency logs. Searching for a domain returns every logged certificate that matches it, which often reveals an organization's sub-domains along with the issuing CAs and validity periods of its certificates.

To use crt.sh, simply enter the domain name or sub-domain you want to look up into the search bar on the home page. crt.sh will then display every matching certificate, including its crt.sh ID, the date it was logged, its validity period, its common name and matching identities, and its issuer.
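
For automation, crt.sh can also return its results as JSON. The following is a minimal sketch, assuming the output=json parameter and the name_value field still behave as described on this page, that collects the identities returned for a domain:

```
# Minimal sketch (assumption: crt.sh's output=json parameter and the
# name_value field behave as described on this page).
import requests

domain = 'monkeytype.com'
payload = {'q': '%.' + domain, 'output': 'json'}
req = requests.get('https://crt.sh/', params=payload, timeout=30)
req.raise_for_status()

# Each JSON entry is one logged certificate; name_value lists its matching
# identities, one per line.
names = set()
for cert in req.json():
    for name in cert['name_value'].splitlines():
        names.add(name)

for name in sorted(names):
    print(name)
```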

Exercise:

crt.sh monitors and records Certificate Transparency logs.

Use crt.sh to identify domains and sub-domains that belong to an organization.

Example:

Using the domain monkeytype.com, I looked up all of the sub-domains that belong to the organization.

By slightly editing andrewsmhay's GitHub script, I was able to query the site and output all of the sub-domains with a Python script.

Script:

```
# Copyright (c) 2021 Andrew Hay

import requests
from bs4 import BeautifulSoup
import pandas as pd
import numpy as np

#Setting the domain and the request
domain = 'monkeytype.com'
payload = {'q': domain}
req = requests.get('https://crt.sh/?', params=payload)

#Defining the parser and extracting the cells of the results table
soup = BeautifulSoup(req.text, features="html.parser")
tbody = soup.find_all("table")[2]
td_list = tbody.find_all("td")

#Collecting the cell text and splitting it into rows of 7 columns
itlist = []
for i in td_list:
    itlist.append(i.text)
itlist = np.asarray(itlist)
itlist = np.split(itlist, len(itlist) // 7)

#Building a DataFrame from the parsed columns
df = pd.DataFrame(itlist, columns = ['crt.sh ID','Logged At','Not Before','Not After','Common Name','Matching Identities','Issuer Name'])
print('Rows: ', len(df))

#Dropping certificates that share the same common name
df = df.drop_duplicates(subset=['Common Name'])
print('Rows: ', len(df))

#Print out the de-duplicated common names (sub-domains)
print(df['Common Name'].to_csv(index=False, header=False))
```
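
To run the script, the requests, beautifulsoup4, pandas, and numpy packages must be installed. Note that it scrapes crt.sh's HTML results table, so it may break if the site's markup changes; the JSON endpoint sketched earlier is a more robust option for automation.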

Output:
