Early narrative explaining results as of May 13, 2022 (presentation for OPA Research):

Whiteboard explainer video:

Below is the automated daily data cultivation, with automated daily aggregate analysis in Wolfram Mathematica:

Daily Data [click to download csv file]

To download or use a copy of the code immediately above, click the three lines in the bottom-left corner and choose "Make Your Own Copy" or "Download". A free, time-limited version of Mathematica can open the download, while making your own copy opens an online version of the software that you can use for free.

Finally, use the code below to cultivate the daily asteroid namesake article counts:

Download names_to_search_fiverr_a.csv

Python Code for Daily Data Cultivation

import requests
from bs4 import BeautifulSoup
import pandas as pd
from datetime import date, timedelta

# dataset containing 1200 names
df = pd.read_csv('names to search Fiverr A.csv')

# creating a list of all 1200 names
all_names = df['NAME'].values

input_date0 = date.today() - timedelta(days=1)  # gives yesterday's date
input_date = input_date0.isoformat()

# Generate a Google News search URL for a given name
def get_url(name):
    url_template = 'https://news.google.com/search?q={}'
    return url_template.format(name)

# Scrape an article's publication date; return a record only if the
# article was published yesterday, otherwise None
def get_news(article, name):
    title_date = article.div.div.time.get('datetime').split('T')[0]
    if title_date == input_date:
        return (title_date, name)
    return None

# Main function: count yesterday's articles for each name
main_list = []
def main_task():
    for news_name in all_names:
        records = []
        url = get_url(news_name)
        response = requests.get(url)
        soup = BeautifulSoup(response.text, 'html.parser')
        articles = soup.find_all('article', 'redacted')  # CSS class name redacted in the original

        for article in articles:
            try:
                all_data = get_news(article, news_name)
            except AttributeError:
                continue  # article lacks the expected date markup
            if all_data is not None:
                records.append(all_data)
        main_list.append((news_name, len(records)))

main_task()
mynamedata = pd.DataFrame(main_list, columns=['NAMES', input_date])
mynamedata.to_csv(input_date + '.csv', index=False)  # one file per day, e.g. 2022-05-12.csv
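Each daily run leaves behind one CSV named after the date, with a NAMES column and a count column. As a sketch of how those per-day files could be combined into a single name-by-date table in Python before handing off to the aggregate analysis (the `combine_daily_counts` helper and the `20*.csv` filename pattern are assumptions for illustration, not part of the original pipeline):

```python
import glob

import pandas as pd

def combine_daily_counts(pattern='20*.csv'):
    """Merge the per-day CSVs produced above into one table:
    one row per name, one count column per date."""
    combined = None
    for path in sorted(glob.glob(pattern)):
        daily = pd.read_csv(path)
        # drop any leftover pandas index column written by to_csv
        daily = daily.loc[:, ~daily.columns.str.startswith('Unnamed')]
        combined = daily if combined is None else combined.merge(daily, on='NAMES')
    return combined
```

Merging on NAMES (rather than concatenating columns) keeps rows aligned even if the names appear in a different order in some daily file.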