Downloading AppVeyor artifacts with a little bit of Python

I have recently released new versions of 3 of my Python modules (pyuv, pycares and python-fibers), which happen to be Python C extensions.

While preparing these releases, I decided to give AppVeyor a try, since it can be used for both integration testing on Windows and Python Wheels generation. I managed to do so following these instructions and checking this project example, and I was (almost) all set.

The missing part was downloading all those built artifacts (the Python wheels) stored in AppVeyor and uploading them to PyPI when making a release. Uploading the wheels is easily done with twine (an example follows below), and for downloading the latest build's artifacts for a given project I wrote the following simple Python script using requests:

#!/usr/bin/env python
# coding=utf-8
# Author: Saúl Ibarra Corretgé <saghul@gmail.com>
# License: MIT

import argparse
import multiprocessing
import requests

from concurrent.futures import ThreadPoolExecutor


BASE_URL = 'https://ci.appveyor.com/api'


def download_file(url):
    # Name the local file after the last path component of the URL
    local_filename = url.split('/')[-1]
    r = requests.get(url, stream=True)
    with open(local_filename, 'wb') as f:
        for chunk in r.iter_content(chunk_size=1024):
            if chunk:  # filter out keep-alive new chunks
                f.write(chunk)


def get_file_urls(options):
    session = requests.Session()
    session.headers.update({'authorization': 'bearer %s' % options.api_token})
    # Fetch the last build for the project, then list the artifacts of each job
    data = session.get(BASE_URL + '/projects/' + options.user + '/' + options.project)
    data = data.json()
    for job in (job['jobId'] for job in data['build']['jobs']):
        job_url = BASE_URL + '/buildjobs/' + job + '/artifacts'
        data = session.get(job_url)
        data = data.json()
        for item in data:
            file_url = job_url + '/' + item['fileName']
            yield file_url


def main(options):
    # Download the artifacts concurrently, one worker per CPU
    with ThreadPoolExecutor(max_workers=multiprocessing.cpu_count()) as e:
        for url in get_file_urls(options):
            e.submit(download_file, url)


if __name__ == '__main__':
    parser = argparse.ArgumentParser(description='AppVeyor artifact downloader')
    parser.add_argument('--api-token', required=True)
    parser.add_argument('--user', required=True)
    parser.add_argument('--project', required=True)
    args = parser.parse_args()
    main(args)


Using it is simple:

appveyor-download --api-token 1234 --user saghul --project pyuv
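
Once the wheels have been downloaded, uploading them to PyPI is a one-liner with twine (this assumes the wheels were saved to the current directory, which is where the script above writes them):

twine upload *.whl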

I hope you find it useful!

:wq

 

erequests 0.4.0 released!

A new version of ERequests, the library that makes it easy to use Requests with Eventlet, has just been released!

The API has been overhauled in order to provide 2 different ways of doing things:

  • A synchronous API, which spawns a green thread and waits for it, for every request sent this way
  • An asynchronous API, which just prepares the requests, allowing the user to throttle them with map or imap

Another important improvement: when sending requests with map/imap, if one of them fails, the exception object is returned instead of being raised, so a single error no longer stops the whole batch midway (a short sketch of handling this follows the example below).

Here is an example script showing both APIs:

import erequests

urls = [
    'http://www.heroku.com',
    'http://tablib.org',
    'http://httpbin.org',
    'http://python-requests.org',
    'http://kennethreitz.com'
]

# sync API (spawns a new green thread and waits for it)
for url in urls:
    print erequests.get(url)

# async API (prepares the requests, they are sent with map/imap)
reqs = [erequests.async.get(url) for url in urls]
print list(erequests.imap(reqs))
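
Since a failed request shows up as the exception object in the results rather than being raised, errors can be handled after the fact. Here is a minimal sketch of what that could look like, assuming imap yields the exception instance for failed requests as described above (the unreachable URL is only there to force a failure):

# force one failure by adding a bogus URL (hypothetical, for illustration only)
reqs = [erequests.async.get(url) for url in urls + ['http://nonexistent.invalid']]
for response in erequests.imap(reqs):
    if isinstance(response, Exception):
        # the request failed; the exception object is the result
        print 'request failed: %r' % response
    else:
        print response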

Last but not least, I’d like to thank Juan Riaza for his help in designing the API and adapting the tests. 🙂

You can install erequests easily from PyPI:

pip install erequests

Enjoy!

:wq