
Releases: IlyaFaer/GitHub-Scraper

0.85.0 - beta

15 May 09:57
  • Feature: archive functionality added. Tweak fill_funcs.to_be_archived() to designate whether an issue should be archived; archived issues are no longer tracked. The archive sheet architecture can be configured with the config.ARCHIVE_SHEET constant (see the sketch after this list).
  • Feature: detect related PRs from adjacent repositories (those tracked on the same sheet).
  • Bugfix: if an issue becomes 404, it is deleted from the table.
  • Bugfix: an update no longer stops when there are no updated issues.
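A minimal sketch of the new hook, assuming fill_funcs hooks receive a PyGithub-style issue object; the exact signature is an assumption, and the 30-day rule is purely illustrative:

```python
# fill_funcs.py (sketch) -- the hook's signature is an assumption;
# `issue` is treated as a PyGithub-style object here.
import datetime


def to_be_archived(issue):
    """Return True to move the issue to the archive sheet.

    Archived issues are no longer tracked by the Scraper.
    """
    if issue.state != "closed":
        return False

    # Illustrative rule: archive issues closed more than 30 days ago.
    age = datetime.datetime.now() - issue.closed_at
    return age.days > 30
```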

0.84.0 - beta

29 Mar 11:23
  • Feature: the last-update timestamps of PRs and issues are saved into a file and reused when the Scraper restarts. The Scraper then processes only the issues/PRs updated since the last filling instead of whole repositories, which greatly reduces the time of the first filling after a restart (see the sketch after this list).
  • Feature: GitHub credentials are requested in the console, so you no longer need to create a login/password file manually.
  • Bugfix: filling failed if an issue became 404 after being added to the spreadsheet.
  • Bugfix: check whether an issue is in the index before trying to delete it.
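The on-disk format is not described in these notes; a rough illustration of the save-and-reuse idea, with a hypothetical file name and layout:

```python
# Sketch of the restart persistence; the file name and layout are
# assumptions, not the Scraper's actual format.
import json
import os

TIMESTAMPS_FILE = "last_updates.json"  # hypothetical name


def load_last_updates():
    """Return {repo_name: ISO-8601 timestamp} saved by the previous run."""
    if not os.path.exists(TIMESTAMPS_FILE):
        return {}
    with open(TIMESTAMPS_FILE) as file_:
        return json.load(file_)


def save_last_updates(timestamps):
    """Persist the timestamps so the next run skips unchanged items."""
    with open(TIMESTAMPS_FILE, "w") as file_:
        json.dump(timestamps, file_)
```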

0.83.2 - beta

07 Mar 10:13
  • Bugfix: an error occurred when there were no updated issues in a repository and the Scraper tried to read an old issue.
  • Bugfix: KeyError on a second update when no issues in a repo had been updated since the beginning of the update cycle.

0.83.1 - beta

05 Mar 13:00
  • Bugfix: on the second update after the Scraper started, old issues could appear in the table with "New" status.

0.83.0 - beta

03 Mar 14:37
  • ID system improved: users no longer need to keep the Repository column in their tables
  • The to_be_ignored() function was added to fill_funcs.py; use it to filter out issues with your own if-statements (see the sketch after this list)
  • Configuration-reloading code was moved into the Spreadsheet() class, so there is no more need to think about it
  • requirements.txt file added to make Scraper installation easier
  • Got rid of constants reloading
  • Bugfix: on the first update after start, the last-updated-issue date was recorded incorrectly, so old and closed issues were appearing in tables
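A sketch of the new hook in fill_funcs.py; as above, the argument shape is an assumption, and the conditions are placeholders for your own rules:

```python
# fill_funcs.py (sketch) -- hypothetical signature; `issue` is assumed
# to expose PyGithub-style attributes.
def to_be_ignored(issue):
    """Return True to keep the issue out of the table."""
    labels = {label.name for label in issue.labels}

    # Replace these if-statements with your own filtering rules.
    if "wontfix" in labels:
        return True
    if issue.title.startswith("[meta]"):
        return True
    return False
```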

0.82.0 - beta

21 Feb 08:18
  • Added a system that checks whether configurations have changed since the last update; if not, the spreadsheet structure is not updated. This reduces the number of outgoing requests when your spreadsheet structure changes infrequently. You can still call Spreadsheet.update_structure(force=True) to update it regardless of whether configurations changed (see the sketch after this list)
  • Added progress logging for issue and PR processing in big repositories
  • sort_func() and designate_status_color() functions moved into fill_funcs.py
  • Bugfix: Spreadsheet.update_structure() failed with a KeyError if a sheet had been deleted from the configurations
  • Removed the redundant "Internal PRs" system
  • Reduced the number of object conversions within the Scraper
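Usage of the force flag, in a minimal sketch; the import path and constructor arguments are assumptions, only update_structure(force=True) is named in this release:

```python
# Sketch; the import and constructor are assumptions.
from spreadsheet import Spreadsheet  # hypothetical import path

spreadsheet = Spreadsheet()

# Rebuilds the structure only if configurations changed:
spreadsheet.update_structure()

# Rebuilds the structure regardless of configuration changes:
spreadsheet.update_structure(force=True)
```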

0.81.0 - beta

10 Feb 18:26
  • Added a new Sheet() class
  • Spreadsheet() got a new sheets attribute: a list containing all of the sheets (see the sketch after this list)
  • The 1000-row limit for clearing and formatting requests no longer applies
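A minimal sketch of the new attribute; the import path and constructor are assumptions, and each element is presumably a Sheet() instance:

```python
# Sketch; the import and constructor are assumptions. `sheets` is the
# new list attribute introduced in this release.
from spreadsheet import Spreadsheet  # hypothetical import path

spreadsheet = Spreadsheet()

for sheet in spreadsheet.sheets:  # presumably Sheet() instances
    print(sheet)
```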

0.80.0 - beta

31 Jan 10:08

First GitHub-Scraper beta version