The milestone comes ten years after Google revealed its first index tally, which recorded some 26 million pages. By 2000, that number had grown to one billion.
Software engineers Jesse Alpert and Nissan Hajaj said in a company blog post that the figure applies only to unique URLs, not to actual web pages.
"Strictly speaking, the number of pages out there is infinite," they explained.
"For example, web calendars may have a 'next day' link, and we could follow that link forever, each time finding a 'new' page."
As the web has expanded, so have the demands of indexing it. The pair said that in Google's early days, a single workstation could process and rank all 26 million pages then in its index.
These days, the task of calculating PageRank is comparable to mapping and ordering every road intersection in the US, except with roughly 50,000 times as many intersections.
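At its smallest scale, that calculation is an iterative walk over a link graph. The following is a minimal power-iteration sketch over a three-page toy graph; the damping factor and the graph itself are illustrative assumptions, not a description of Google's production system:

```python
def pagerank(links: dict[str, list[str]], damping: float = 0.85,
             iterations: int = 50) -> dict[str, float]:
    """Toy PageRank: repeatedly redistribute each page's score along its outlinks."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if outlinks:
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
        rank = new_rank
    return rank

# Three-page example: A and C link to B, B links back to A.
print(pagerank({"A": ["B"], "B": ["A"], "C": ["B"]}))
```

The same arithmetic, applied to a graph with a trillion nodes, is what turns the job into the road-map-scale problem the engineers describe.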
"To keep up with this volume of information, our systems have come a long way since the first set of web data Google processed to answer queries," wrote Alpert and Hajaj.
"Today, Google downloads the web continuously, collecting updated page information and re-processing the entire web-link graph several times per day."