Using Python to Quickly Check Keyword Results

Updated March 03, 2022


I get a lot of emails about this old Python script, so I figured I'd give it a quick update.

I'll come back through soon and do a more thorough update explaining what does what, but for now, here is the updated code:

I wrote this quick console app in Python to quickly grab results for a given keyword. Usage is fairly straightforward:

1. Copy/paste the code below into a new file – let's call it search.py (any name works)

2. Open up your terminal and cd into the directory where you saved the file

3. Run it with python search.py (substituting whatever filename you chose)


4. Type your keyword directly into the terminal window when prompted and press Enter. Your results will be shown in a few seconds.

import requests
from bs4 import BeautifulSoup

USER_AGENT = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:95.0) Gecko/20100101 Firefox/95.0'}

def search(search_term, number_results=100, language_code='en'):
    escaped_search_term = search_term.replace(' ', '+')
    google_url = f'https://www.google.com/search?q={escaped_search_term}&num={number_results}&hl={language_code}'
    response = requests.get(google_url, headers=USER_AGENT).text
    soup = BeautifulSoup(response, 'html.parser')
    results = []
    for link in soup.find_all('a'):
        raw_url = link.get('href')
        title = link.getText()
        results.append([title, raw_url])
    print("all done!")
    return results

if __name__ == '__main__':
    for title, url in search(input('type in your keyword and press enter \n')):
        print(title, url)
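One thing to watch for: soup.find_all('a') returns every anchor on the page, not just the organic results. The old code below deals with this by looking for hrefs that start with /url?, which is how Google wraps result links. Here is a small helper of my own (not part of the original script) that applies the same trick to pull the real destination out of a raw href:

```python
from urllib.parse import urlparse, parse_qs

def clean_result_url(raw_url):
    # Organic result links look like '/url?q=<real-url>&sa=...';
    # anything else (navigation, images, etc.) is skipped.
    if raw_url and raw_url.startswith('/url?'):
        query = parse_qs(urlparse(raw_url).query)
        return query.get('q', [None])[0]
    return None
```

You could call this on raw_url inside the loop above and keep only the links where it returns something other than None.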

…The old code (for posterity). Don’t use this, but learn from it.

from urllib.parse import urlencode, urlparse, parse_qs
from lxml.html import fromstring
import csv
from requests import get

def search():
    user_query = input('type keywords to search: \n')
    raw = get("https://www.google.com/search?q=" + user_query).text
    page = fromstring(raw)
    links = page.cssselect('.r a')
    for row in links:
        raw_url = row.get('href')
        title = row.text_content()
        if raw_url.startswith("/url?"):
            url = parse_qs(urlparse(raw_url).query)['q']
            result_row = [title, url[0]]
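The csv module imported at the top goes unused in the snippet as shown; presumably each result_row was written out to a file. A minimal sketch of how that step might have looked (the results.csv filename and the header row are my assumptions, not from the original script):

```python
import csv

def write_results(rows, path='results.csv'):
    # rows is a list of [title, url] pairs like result_row above.
    with open(path, 'w', newline='', encoding='utf-8') as f:
        writer = csv.writer(f)
        writer.writerow(['title', 'url'])  # header row (assumed)
        writer.writerows(rows)
```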