SEO Python scraper to extract data from major search engine result pages. Extract data such as URL, title, snippet, rich snippet and result type from search results for given keywords. Detect ads or take automated screenshots. You can also fetch the text content of URLs found in the search results or of URLs you provide yourself. It is useful for SEO and business-related research tasks.
Extracted result types:
- ads_main - advertisements within regular search results
- image - result from image search
- news - news teaser within regular search results
- results - standard search result
- shopping - shopping teaser within regular search results
- videos - video teaser within regular search results
Data extracted per result (an illustrative record is sketched after this list):
- domain
- rank
- rich snippet
- site links
- snippet
- title
- type
- url
- visible url
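To make the field list concrete, here is an illustrative record. The key names and values are assumptions derived from the list above, not the exact schema SerpScrap returns; compare with real output before relying on them.

# Illustrative only: key names and values are assumptions based on the field
# list above, not the exact schema returned by SerpScrap.
example_result = {
    'query': 'example',
    'type': 'results',        # one of the result types listed above
    'rank': 1,
    'url': 'https://www.example.com/',
    'visible_url': 'www.example.com',
    'domain': 'example.com',
    'title': 'Example Domain',
    'snippet': 'This domain is for use in illustrative examples in documents.',
    'rich_snippet': None,
    'site_links': [],
}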
A screenshot of each result page can also be taken. You can also scrape the text content of each result URL. The results can be saved as CSV for later analysis, and you can use your own proxy list if required.
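To illustrate, a minimal configuration sketch for these features follows. The 'scrape_urls' option also appears in the example further below; the screenshot, CSV and proxy option names are assumptions and should be checked against the configuration documentation.

import serpscrap

config = serpscrap.Config()
config.set('scrape_urls', True)          # also fetch the text content of result urls
# The option names below are assumptions -- verify them in the configuration
# documentation before relying on them.
config.set('screenshot', True)           # assumed: save a screenshot per result page
config.set('output_format', 'csv')       # assumed: write results as CSV
config.set('proxy_file', 'proxies.txt')  # assumed: path to your own proxy list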
See http://serpscrap.readthedocs.io/en/latest/ for documentation.
Source is available at https://github.com/ecoron/SerpScrap
The easy way to install:
pip uninstall SerpScrap -y
pip install SerpScrap --upgrade
More details in the install [1] section of the documentation.
SerpScrap in your applications
#!/usr/bin/python3
# -*- coding: utf-8 -*-
import pprint
import serpscrap
keywords = ['example']
config = serpscrap.Config()
config.set('scrape_urls', False)
scrap = serpscrap.SerpScrap()
scrap.init(config=config.get(), keywords=keywords)
results = scrap.run()
for result in results:
    pprint.pprint(result)
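Continuing the example above, here is a sketch of post-processing the returned records by their result type. The key names ('type', 'title', 'url') are assumptions based on the field list above and may differ in your installed version, so pprint a record first to confirm them.

# Continuation of the example above. Key names are assumptions based on the
# documented fields; check a pprinted record for the real names.
from collections import defaultdict

by_type = defaultdict(list)
for result in results:
    by_type[result.get('type', 'unknown')].append(result)

# Inspect advertisements separately from organic results, for example.
for ad in by_type.get('ads_main', []):
    print(ad.get('title'), ad.get('url'))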
More details in the examples [2] section of the documentation.
To avoid encode/decode issues on Windows, use these commands before you start using SerpScrap in your CLI.
chcp 65001
set PYTHONIOENCODING=utf-8
- SerpScrap should work on Linux, Windows and Mac OS with Python >= 3.4 installed
- SerpScrap requires lxml
- It does not work on iOS
Notes about major changes between releases
- updated dependencies: chromedriver >= 76.0.3809.68 to match current browser versions, sqlalchemy >= 1.3.7 to address security issues, and other minor dependency updates
- minor changes to install_chrome.sh
I recommend updating to the latest version of SerpScrap, because the search engine has updated the markup of its search result pages (SERPs).
- Update and cleanup of selectors to fetch results
- new result type: videos
- Chrome headless is now the default browser; usage of PhantomJS is deprecated
- chromedriver is installed on the first run (tested on Linux and Windows; Mac OS should also work)
- the behavior of scraping raw text contents from SERP URLs, and of course from given URLs, has changed
- scraping of SERP results and contents can now run in one pass
- CSV output format changed; it is now tab-separated and quoted
- support for headless Chrome, adjusted default time between scrapes
- result types added (news, shopping, image)
- Image search is supported
- text processing tools removed.
- fewer requirements
SerpScrap uses headless Chrome [3] and lxml [4] to scrape SERP results. For the raw text content of fetched URLs it uses beautifulsoup4 [5]. SerpScrap also supports PhantomJS [6], a scriptable headless WebKit which is deprecated and is installed automatically on the first run (Linux, Windows). The scrapcore was based on GoogleScraper [7], an outdated project, and has seen many changes and improvements.
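For orientation, here is a minimal standalone sketch of the kind of raw text extraction that beautifulsoup4 (with lxml as the parser backend) is used for. This is not SerpScrap's internal code, just the underlying technique.

# Minimal sketch of raw text extraction with beautifulsoup4 and lxml; this is
# the underlying technique, not SerpScrap's internal implementation.
import urllib.request

from bs4 import BeautifulSoup

with urllib.request.urlopen('https://www.example.com/') as response:
    html = response.read()

soup = BeautifulSoup(html, 'lxml')      # lxml as the parser backend
for tag in soup(['script', 'style']):   # drop non-visible elements
    tag.decompose()
text = ' '.join(soup.get_text(separator=' ').split())
print(text[:200])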
[1] http://serpscrap.readthedocs.io/en/latest/install.html
[2] http://serpscrap.readthedocs.io/en/latest/examples.html
[3] http://chromedriver.chromium.org/
[4] https://lxml.de/
[5] https://www.crummy.com/software/BeautifulSoup/
[6] https://github.com/ariya/phantomjs
[7] https://github.com/NikolaiT/GoogleScraper