Overview¶
Alfred-Workflow is a Python helper library for Alfred 2 workflow authors, developed and hosted on GitHub.
Alfred workflows typically take user input, fetch data from the Web or elsewhere, filter them and display results to the user. Alfred-Workflow takes care of a lot of the details for you, allowing you to concentrate your efforts on your workflow’s functionality.
Alfred-Workflow supports OS X 10.6+ (Python 2.6 and 2.7).
Features¶
- Catches and logs workflow errors for easier development and support
- “Magic” arguments to help development, debugging and management of the workflow
- Auto-saves settings
- Super-simple data caching
- Fuzzy, Alfred-like search/filtering with diacritic folding
- Keychain support for secure storage (and syncing) of passwords, API keys etc.
- Simple generation of Alfred feedback (XML output)
- Input/output decoding for handling non-ASCII text
- Lightweight web API with Requests-like interface
- Pre-configured logging
- Painlessly add directories to sys.path
- Easily launch background tasks (daemons) to keep your workflow responsive
- Check for and install new workflow versions using GitHub releases.
Quick example¶
Here’s how to show recent Pinboard.in posts in Alfred.
Create a new workflow in Alfred’s preferences. Add a Script Filter with Language set to /usr/bin/python and paste the following into the Script field (changing API_KEY to your own Pinboard API key):
import sys

from workflow import Workflow, ICON_WEB, web

API_KEY = 'your-pinboard-api-key'

def main(wf):
    url = 'https://api.pinboard.in/v1/posts/recent'
    params = dict(auth_token=API_KEY, count=20, format='json')
    r = web.get(url, params)
    r.raise_for_status()
    for post in r.json()['posts']:
        wf.add_item(post['description'], post['href'], arg=post['href'],
                    uid=post['hash'], valid=True, icon=ICON_WEB)
    wf.send_feedback()

if __name__ == u"__main__":
    wf = Workflow()
    sys.exit(wf.run(main))
Add an Open URL action to your workflow with {query} as the URL, connect your Script Filter to it, and you can now hit ENTER on a Pinboard item in Alfred to open it in your browser.
Warning
Using the above example code as a workflow will likely get you banned by the Pinboard API. See the Tutorial if you want to build an API terms-compliant (and super-fast) Pinboard workflow.
Installation¶
Alfred-Workflow can be installed from the Python Package Index with pip or from the source on GitHub.
pip / PyPI¶
You can install Alfred-Workflow directly into your workflow with:
pip install --target=/path/to/my/workflow Alfred-Workflow
Important
If you intend to distribute your workflow to other users, you should include Alfred-Workflow (and other non-standard Python libraries your workflow requires) within your workflow as described above. Do not ask users to install anything into their system Python. That way lies broken software.
GitHub¶
Download the alfred-workflow-X.X.X.zip file from the GitHub releases page and either extract the ZIP to the root directory of your workflow (where info.plist is) or place the ZIP in the root directory and add sys.path.insert(0, 'alfred-workflow-X.X.X.zip') to the top of your Python scripts.
Important
background and update will not work if you are importing Alfred-Workflow from a zip file. If you need to use background or the self-updating functionality, you must extract the zip archive.
Alternatively, you can download the source code from the GitHub repository and copy the workflow subfolder to the root directory of your workflow.
Your workflow directory should look something like this (where yourscript.py contains your workflow code and info.plist is the workflow information file generated by Alfred):
Your Workflow/
    info.plist
    icon.png
    workflow/
        __init__.py
        background.py
        update.py
        version
        workflow.py
        web.py
    yourscript.py
    etc.
Or like this:
Your Workflow/
    info.plist
    icon.png
    workflow-1.X.X.zip
    yourscript.py
    etc.
Tutorial¶
A two-part tutorial on writing an Alfred 2 workflow with Alfred-Workflow, taking you through the basics to a performant, release-ready workflow you can share with the world. This is the best starting point for workflow authors new to Python or programming in general; more experienced Python coders can skim it or skip straight ahead to the User Manual.
Part 1: A Basic Pinboard Workflow¶
In which we build an Alfred workflow to view recent posts to Pinboard.in.
Note
To use workflows, you must own Alfred’s Powerpack.
Creating a new Workflow¶
First, create a new, blank workflow in Alfred 2’s Preferences, under the Workflows tab:

Describing your Workflow¶
When the info dialog pops up, give your workflow a name, a Bundle Id, and possibly a description.
Important
The Bundle Id is essential: it’s the unique name used by Alfred and Alfred-Workflow internally to identify your workflow. Alfred-Workflow won’t work without it.
You can also drag an image to the icon field to the left to make your workflow pretty (Alfred will use this icon to show your workflow actions in its action list). I grabbed a free Pinboard icon.

Adding a Script Filter¶
The next step is to add a Script Filter. Script Filters receive input from Alfred (the query entered by the user) and send results back to Alfred. They should run as quickly as possible because Alfred will try to call the Script Filter for every character typed into its query box:

And enter the details for this action (the Escaping options don’t matter at the moment because our script currently doesn’t accept a query):

Choose a Keyword, which you will enter in Alfred to activate your workflow. At the moment, our Script Filter won’t take any arguments, so choose No Argument. The Placeholder Title and Subtext are what Alfred will show when you type the Keyword:

The “Please Wait” Subtext is what is shown when your workflow is working, which in our case means fetching data from pinboard.in.
Very importantly, set the Language to /bin/bash.
The Script field should contain:
python pinboard.py
We’re going to create the pinboard.py script in a second. The Escaping options don’t matter for now because our Script Filter doesn’t accept an argument.
Note
You can choose /usr/bin/python as the Language and paste your Python code into the Script box, but this isn’t the best idea.
If you do this, you can’t run the script from the Terminal (which can be helpful when developing/debugging), and you can’t as easily use a proper code editor, which makes debugging difficult: Python always tells you which line an error occurred on, but the Script field doesn’t show line numbers, so lots of counting is involved.
Now that Alfred has created the workflow, we can open it up and add our script. Right-click on your workflow in the list on the left and choose Show in Finder.

The directory will show one or two files (depending on whether or not you chose an icon):

At this point, download the latest release of Alfred-Workflow from GitHub, extract it and copy the workflow directory into your workflow’s directory:

Now we can start coding.
Writing your Python script¶
Using your text editor of choice [1], create a new text file and save it in your workflow directory as pinboard.py (the name we used when setting up the Script Filter).
Add the following code to pinboard.py (be sure to change API_KEY to your Pinboard API key, which you can find on the settings/password page):
# encoding: utf-8

import sys

from workflow import Workflow, ICON_WEB, web

API_KEY = 'your-pinboard-api-key'

def main(wf):
    url = 'https://api.pinboard.in/v1/posts/recent'
    params = dict(auth_token=API_KEY, count=20, format='json')
    r = web.get(url, params)

    # throw an error if request failed
    # Workflow will catch this and show it to the user
    r.raise_for_status()

    # Parse the JSON returned by pinboard and extract the posts
    result = r.json()
    posts = result['posts']

    # Loop through the returned posts and add an item for each to
    # the list of results for Alfred
    for post in posts:
        wf.add_item(title=post['description'],
                    subtitle=post['href'],
                    icon=ICON_WEB)

    # Send the results to Alfred as XML
    wf.send_feedback()

if __name__ == u"__main__":
    wf = Workflow()
    sys.exit(wf.run(main))
All being well, our workflow should now work. Fire up Alfred, enter your keyword and hit ENTER. You should see something like this:

If something went wrong (e.g. an incorrect API key, as in the screenshot), you should see an error like this:

If Alfred shows nothing at all, it probably couldn’t run your Python script. You’ll have to open the workflow directory in Terminal and run the script by hand to see the error:
python pinboard.py
Adding workflow actions¶
So now we can see a list of recent posts in Alfred, but we can’t do anything with them. We’re going to change that and make the items “actionable” (i.e. you can hit ENTER on them and something happens; in this case, the page will be opened in your browser).
Add the arg and valid parameters (lines 27–28) to the wf.add_item() call in your pinboard.py file:
# encoding: utf-8

import sys

from workflow import Workflow, ICON_WEB, web

API_KEY = 'your-pinboard-api-key'

def main(wf):
    url = 'https://api.pinboard.in/v1/posts/recent'
    params = dict(auth_token=API_KEY, count=20, format='json')
    r = web.get(url, params)

    # throw an error if request failed
    # Workflow will catch this and show it to the user
    r.raise_for_status()

    # Parse the JSON returned by pinboard and extract the posts
    result = r.json()
    posts = result['posts']

    # Loop through the returned posts and add an item for each to
    # the list of results for Alfred
    for post in posts:
        wf.add_item(title=post['description'],
                    subtitle=post['href'],
                    arg=post['href'],
                    valid=True,
                    icon=ICON_WEB)

    # Send the results to Alfred as XML
    wf.send_feedback()

if __name__ == u"__main__":
    wf = Workflow()
    sys.exit(wf.run(main))
valid=True tells Alfred that the item is actionable, and arg is the value it will pass to the next action (in this case a URL).
Go back to Alfred’s Preferences and add an Open URL action:

Then enter {query} as the URL:

When you hover your mouse over the Script Filter, you’ll notice a small “nub” appears on the right-hand side:

Click and hold on this, and drag a connection to the Open URL action:

Now run your workflow again in Alfred, select one of the results and hit ENTER. The post’s webpage should open in your default browser.
Improving performance and not getting banned¶
The terms of use of the Pinboard API specifically limit calls to the recent posts method to 1 call/minute. As it’s likely you’ll call your workflow more often than that, we need to cache the results from the API and use the cached data for at least a minute. Alfred-Workflow makes this a doddle with its cached_data() method.
Go back to pinboard.py and make the following changes:
# encoding: utf-8

import sys

from workflow import Workflow, ICON_WEB, web

API_KEY = 'your-pinboard-api-key'

def get_recent_posts():
    """Retrieve recent posts from Pinboard.in

    Returns a list of post dictionaries.

    """
    url = 'https://api.pinboard.in/v1/posts/recent'
    params = dict(auth_token=API_KEY, count=20, format='json')
    r = web.get(url, params)

    # throw an error if request failed
    # Workflow will catch this and show it to the user
    r.raise_for_status()

    # Parse the JSON returned by pinboard and extract the posts
    result = r.json()
    posts = result['posts']

    return posts

def main(wf):
    # Retrieve posts from cache if available and no more than 60
    # seconds old
    posts = wf.cached_data('posts', get_recent_posts, max_age=60)

    # Loop through the returned posts and add an item for each to
    # the list of results for Alfred
    for post in posts:
        wf.add_item(title=post['description'],
                    subtitle=post['href'],
                    arg=post['href'],
                    valid=True,
                    icon=ICON_WEB)

    # Send the results to Alfred as XML
    wf.send_feedback()
We’ve moved the code that retrieves the data from the API to a separate function (get_recent_posts(), line 9), and instead we ask Workflow.cached_data() (line 33) for the data cached under the name posts (the first argument). cached_data() will first check its cache for data saved under posts and return those data if they’re less than max_age seconds old. If the data are older or don’t exist, it will call the get_recent_posts() function passed as the second parameter, cache the data returned by that function under the name posts, and return it.
So now we won’t get banned by Pinboard for hammering the API, and as a bonus, the workflow is now blazingly fast when the data are in its cache. For this reason, it’s probably a good idea to increase max_age to 300 or 600 seconds (5 or 10 minutes) or even more, depending on how often you add new posts to Pinboard, to get super-fast results more often.
Making the posts searchable¶
What if you’re looking for a specific post? Who’s got time to scroll through a list of 20 results? Let’s make them searchable.
First, update the Script Filter settings. Next to Keyword, change No Argument to Argument Optional and select with space. with space means that when you hit ENTER or TAB on your workflow action, Alfred will add a space after it, so you can start typing your query immediately. Then add "{query}" in the Script text field. {query} will be replaced by Alfred with whatever you’ve typed after the keyword. Finally, set the Escaping options to:
- Backquotes
- Double Quotes
- Dollars
- Backslashes
and nothing else. This ensures that the query reaches your Python script unmolested by bash. Your Script Filter settings should now look like this:

First, we’ll set the script to get 100 recent posts from Pinboard (the maximum allowed) in line 16 and to cache them for 10 minutes in line 33 (or use 300 seconds for 5 minutes if you’re a heavy Pinboardista):
# encoding: utf-8

import sys

from workflow import Workflow, ICON_WEB, web

API_KEY = 'your-pinboard-api-key'

def get_recent_posts():
    """Retrieve recent posts from Pinboard.in

    Returns a list of post dictionaries.

    """
    url = 'https://api.pinboard.in/v1/posts/recent'
    params = dict(auth_token=API_KEY, count=100, format='json')
    r = web.get(url, params)

    # throw an error if request failed
    # Workflow will catch this and show it to the user
    r.raise_for_status()

    # Parse the JSON returned by pinboard and extract the posts
    result = r.json()
    posts = result['posts']

    return posts

def main(wf):
    # Retrieve posts from cache if available and no more than 600
    # seconds old
    posts = wf.cached_data('posts', get_recent_posts, max_age=600)

    # Loop through the returned posts and add an item for each to
    # the list of results for Alfred
    for post in posts:
        wf.add_item(title=post['description'],
                    subtitle=post['href'],
                    arg=post['href'],
                    valid=True,
                    icon=ICON_WEB)

    # Send the results to Alfred as XML
    wf.send_feedback()

if __name__ == u"__main__":
    wf = Workflow()
    sys.exit(wf.run(main))
Then we need to add the ability to receive the query from Alfred and filter our posts based on it:
# encoding: utf-8

import sys

from workflow import Workflow, ICON_WEB, web

API_KEY = 'your-pinboard-api-key'

def get_recent_posts():
    """Retrieve recent posts from Pinboard.in

    Returns a list of post dictionaries.

    """
    url = 'https://api.pinboard.in/v1/posts/recent'
    params = dict(auth_token=API_KEY, count=100, format='json')
    r = web.get(url, params)

    # throw an error if request failed
    # Workflow will catch this and show it to the user
    r.raise_for_status()

    # Parse the JSON returned by pinboard and extract the posts
    result = r.json()
    posts = result['posts']

    return posts

def search_key_for_post(post):
    """Generate a string search key for a post"""
    elements = []
    elements.append(post['description'])  # title of post
    elements.append(post['tags'])  # post tags
    elements.append(post['extended'])  # description
    return u' '.join(elements)

def main(wf):
    # Get query from Alfred
    if len(wf.args):
        query = wf.args[0]
    else:
        query = None

    # Retrieve posts from cache if available and no more than 600
    # seconds old
    posts = wf.cached_data('posts', get_recent_posts, max_age=600)

    # If script was passed a query, use it to filter posts
    if query:
        posts = wf.filter(query, posts, key=search_key_for_post)

    # Loop through the returned posts and add an item for each to
    # the list of results for Alfred
    for post in posts:
        wf.add_item(title=post['description'],
                    subtitle=post['href'],
                    arg=post['href'],
                    valid=True,
                    icon=ICON_WEB)

    # Send the results to Alfred as XML
    wf.send_feedback()

if __name__ == u"__main__":
    wf = Workflow()
    sys.exit(wf.run(main))
Looking at main() first, we add a query variable (lines 40–44). Because our Script Filter can run with or without an argument, we test to see if any arguments were passed to the script via the args attribute of Workflow, and grab the first one if there were (this will be the contents of {query} from the Script Filter).
Using args is similar to accessing sys.argv[1:] directly, but it additionally decodes the arguments to Unicode and normalizes them. It also enables “Magic” arguments.
After getting all the posts from the cache or Pinboard, we then filter them using the Workflow.filter() method if there is a query (lines 51–52). Workflow.filter() implements an Alfred-like search algorithm (e.g. “am” will match “Activity Monitor” as well as “I Am Legend”), but it needs a string to search. Therefore, we write the search_key_for_post() function (line 29) that will build a searchable string for each post, comprising its title, tags and description (in that order).
Important
In the last line of search_key_for_post(), we join the elements with u' ' (a Unicode space), not ' ' (a byte-string space). The web.Response.json() method returns Unicode (as do most Alfred-Workflow methods and functions), and mixing Unicode and byte-strings will cause a fatal error if the byte-string contains non-ASCII characters. In this particular situation, using a byte-string space wouldn’t cause any problems (a space is ASCII), but avoiding mixing byte-strings and Unicode is a very good habit to get into.
When coding in Python 2, you have to be aware of which strings are Unicode and which are encoded (byte) strings. Best practice is to use Unicode internally and decode all text to Unicode when it arrives in your workflow (from the Web, system etc.).
Alfred-Workflow’s APIs use Unicode and it works hard to hide as much of the complexity of working with byte-strings and Unicode as possible, but you still need to manually decode encoded byte-strings from other sources with Workflow.decode() to avoid fatal encoding errors.
See Encoded strings and Unicode in the User Manual for more information on dealing with encoded (byte) strings and Unicode in workflows.
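As a rough illustration of that habit (this snippet is not part of the Pinboard workflow; ls is just a stand-in for any source of encoded byte-strings):
# encoding: utf-8

import subprocess

from workflow import Workflow

wf = Workflow()

# `ls` (like most command-line tools) returns encoded bytes, not Unicode
output = subprocess.Popen(['ls', '/Applications'],
                          stdout=subprocess.PIPE).communicate()[0]

# Decode to normalized Unicode before mixing it with other Unicode text
listing = wf.decode(output)

for name in listing.splitlines():
    wf.logger.debug(u'Found app: {0}'.format(name))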
If you’ve been trying out the workflow, you’ve probably noticed that your queries match a lot of posts they really shouldn’t. The reason for this is that, by default, Workflow.filter() matches anything that contains all the characters of query in the same order, regardless of case. To fix this, we’ll add a min_score argument to Workflow.filter(). Change the line:
posts = wf.filter(query, posts, key=search_key_for_post)
to:
posts = wf.filter(query, posts, key=search_key_for_post, min_score=20)
and try the workflow again. The junk results should be gone. You can adjust min_score up or down depending on how strict you want to be with the results.
What now?¶
So we’ve got a working workflow, but it’s not yet ready to be distributed to other users (we can’t reasonably ask users to edit the code to enter their API key, especially as they’d have to do it again after updating the workflow to a new version). We’ll turn what we’ve got into a distribution-ready workflow in the second part of the tutorial.
For more information about writing Alfred workflows, try the following:
- A good tutorial on Alfred workflows for beginners by Richard Guay
- The Alfred Forum. It’s a good place to find workflows, and the Workflow Help & Questions forum is the best place to get help with writing workflows.
To learn more about coding in Python, try these resources:
- The Python Tutorial is a good place to start learning (more) about Python programming.
- Dive into Python by the dearly departed (from the Web) Mark Pilgrim is a wonderful (and free) book.
- Learn Python the Hard Way isn’t as hard as it sounds. It’s actually rather excellent, in fact.
[1] Do not use TextEdit to edit code. By default it uses “smart” quotes, which will break code. If you have OS X 10.7 or later, TextMate is an excellent and free editor. TextWrangler is another good, free editor for OS X (supports 10.6).
Part 2: A Distribution-Ready Pinboard Workflow¶
In which we create a Pinboard.in workflow ready for mass consumption.
In the first part of the tutorial, we built a useable workflow to view, search and open your recent Pinboard posts. The workflow isn’t quite ready to be distributed to other users, however: we can’t expect them to go grubbing around in the source code like an animal to set their own API keys.
What’s more, an update to the workflow would overwrite their changes.
So now we’re going to edit the workflow so users can add their API key from the comfort of Alfred’s friendly query box, and use Workflow.settings to save it in the workflow’s data directory where it won’t get overwritten.
Performing multiple actions from one script¶
To set the user’s API key, we’re going to need a new action. We could write a second script to do this, but we’re going to stick with one script and make it smart enough to do two things, instead. The advantage of using one script is that if you build a workflow with lots of actions, you don’t have a dozen or more scripts to manage.
We’ll start by adding an argument parser (using argparse [1]) to main() and some if-clauses to alter the script’s behaviour depending on the arguments passed to it by Alfred.
# encoding: utf-8

import sys
import argparse

from workflow import Workflow, ICON_WEB, ICON_WARNING, web

def get_recent_posts(api_key):
    """Retrieve recent posts from Pinboard.in

    Returns a list of post dictionaries.

    """
    url = 'https://api.pinboard.in/v1/posts/recent'
    params = dict(auth_token=api_key, count=100, format='json')
    r = web.get(url, params)

    # throw an error if request failed
    # Workflow will catch this and show it to the user
    r.raise_for_status()

    # Parse the JSON returned by pinboard and extract the posts
    result = r.json()
    posts = result['posts']

    return posts

def search_key_for_post(post):
    """Generate a string search key for a post"""
    elements = []
    elements.append(post['description'])  # title of post
    elements.append(post['tags'])  # post tags
    elements.append(post['extended'])  # description
    return u' '.join(elements)

def main(wf):
    # build argument parser to parse script args and collect their
    # values
    parser = argparse.ArgumentParser()
    # add an optional (nargs='?') --setkey argument and save its
    # value to 'apikey' (dest). This will be called from a separate
    # "Run Script" action with the API key
    parser.add_argument('--setkey', dest='apikey', nargs='?', default=None)
    # add an optional query and save it to 'query'
    parser.add_argument('query', nargs='?', default=None)
    # parse the script's arguments
    args = parser.parse_args(wf.args)

    ####################################################################
    # Save the provided API key
    ####################################################################

    # decide what to do based on arguments
    if args.apikey:  # Script was passed an API key
        # save the key
        wf.settings['api_key'] = args.apikey
        return 0  # 0 means script exited cleanly

    ####################################################################
    # Check that we have an API key saved
    ####################################################################

    api_key = wf.settings.get('api_key', None)
    if not api_key:  # API key has not yet been set
        wf.add_item('No API key set.',
                    'Please use pbsetkey to set your Pinboard API key.',
                    valid=False,
                    icon=ICON_WARNING)
        wf.send_feedback()
        return 0

    ####################################################################
    # View/filter Pinboard posts
    ####################################################################

    query = args.query

    # Retrieve posts from cache if available and no more than 600
    # seconds old
    def wrapper():
        """`cached_data` can only take a bare callable (no args),
        so we need to wrap callables needing arguments in a function
        that needs none.
        """
        return get_recent_posts(api_key)

    posts = wf.cached_data('posts', wrapper, max_age=600)

    # If script was passed a query, use it to filter posts
    if query:
        posts = wf.filter(query, posts, key=search_key_for_post, min_score=20)

    # Loop through the returned posts and add an item for each to
    # the list of results for Alfred
    for post in posts:
        wf.add_item(title=post['description'],
                    subtitle=post['href'],
                    arg=post['href'],
                    valid=True,
                    icon=ICON_WEB)

    # Send the results to Alfred as XML
    wf.send_feedback()
    return 0

if __name__ == u"__main__":
    wf = Workflow()
    sys.exit(wf.run(main))
Quite a lot has happened here: at the top in line 5, we’re importing an extra icon (ICON_WARNING) that we use in main() to notify the user that their API key is missing and that they should set it (lines 65–72). (You can see a list of all supported icons here.)
We’ve adapted get_recent_posts() to accept an api_key argument. We could continue to use the API_KEY global variable, but that’d be bad form.
As a result of this, we’ve had to alter the way Workflow.cached_data() is called. It can’t call a function that requires any arguments, so we’ve added a wrapper() function within main() (lines 82–87) that calls get_recent_posts() with the necessary api_key argument, and we pass this wrapper() function (which needs no arguments) to Workflow.cached_data() instead (line 89).
At the top of main() (lines 39–49), we’ve added an argument parser using argparse that can take an optional --setkey APIKEY argument and an optional query argument (remember the script doesn’t require a query).
Then, in lines 55–59, we check if an API key was passed using --setkey. If it was, we save it using settings (see below). Once this is done, we exit the script.
If no API key was specified with --setkey, we try to show/filter Pinboard posts as before. But first of all, we now have to check to see if we already have an API key saved (lines 65–72). If not, we show the user a warning (No API key set) and exit the script.
Finally, if we have an API key saved, we retrieve it and show/filter the Pinboard posts just as before (lines 78–107).
Of course, we don’t have an API key saved, and we haven’t yet set up our workflow in Alfred to save one, so the workflow currently won’t work. Try to run it, and you’ll see the warning we just implemented:

So let’s add that functionality now.
Multi-step actions¶
Asking the user for input and saving it is best done in two steps:
- Ask for the data.
- Pass it to a second action to save it.
A Script Filter is designed to be called constantly by Alfred and return results. This time, we just want to get some data, so we’ll use a Keyword input instead.
Go back to your workflow in Alfred’s Preferences and add a Keyword input:

And set it up as follows (we’ll use the keyword pbsetkey because that’s what we told the user to use in the above warning message):

You can now enter pbsetkey in Alfred and see the following:

It won’t do anything yet, though, as we haven’t connected its output to anything.
Back in Alfred’s Preferences, add a Run Script action:

and point it at our pinboard.py script with the --setkey argument:

Finally, connect the pbsetkey Keyword to the new Run Script action:

Now you can call pbsetkey in Alfred, paste in your Pinboard API key and hit ENTER. It will be saved by the workflow and pbrecent will once again work as expected. Try it.
It’s a little confusing receiving no feedback on whether the key was saved or not, so go back into Alfred’s Preferences, and add an Output > Post Notification action to your workflow:

In the resulting pop-up, enter a message to be shown in Notification Center:

and connect the Run Script we just added to it:

Try setting your API key again with pbsetkey, and this time you’ll get a notification that it was saved.
Saving settings¶
Saving the API key was pretty easy (one line of code). Settings is a special dictionary that automatically saves itself when you change its contents. It can be used much like a normal dictionary, with the caveat that all values must be serializable to JSON, as the settings are saved as a JSON file in the workflow’s data directory.
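A minimal sketch of how Workflow.settings behaves (the key names here are purely illustrative):
from workflow import Workflow

wf = Workflow()

# Assigning a key saves settings.json in the workflow's data directory
wf.settings['api_key'] = 'your-pinboard-api-key'

# Values must be JSON-serializable (strings, numbers, lists, dicts...)
wf.settings['favourite_tags'] = ['python', 'alfred']

# Read it back like a normal dictionary
api_key = wf.settings.get('api_key')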
Very simple, yes, but secure? No. A better place to save the API key would be in the user’s Keychain. Let’s do that.
Workflow provides three methods for managing data saved in OS X’s Keychain: get_password(), save_password() and delete_password(). They are all called with an account name and an optional service name (by default, this is your workflow’s bundle ID).
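Here’s a minimal sketch of the three methods (the account name pinboard_api_key is simply our own choice of name):
from workflow import Workflow, PasswordNotFound

wf = Workflow()

# Store a secret in the user's Keychain under the workflow's bundle ID
wf.save_password('pinboard_api_key', 'your-pinboard-api-key')

# Retrieve it again; raises PasswordNotFound if it isn't there
try:
    api_key = wf.get_password('pinboard_api_key')
except PasswordNotFound:
    api_key = None

# Delete it when it's no longer needed
wf.delete_password('pinboard_api_key')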
Change your pinboard.py script as follows to use Keychain instead of a JSON file to store your API key:
# encoding: utf-8

import sys
import argparse

from workflow import Workflow, ICON_WEB, ICON_WARNING, web, PasswordNotFound

def get_recent_posts(api_key):
    """Retrieve recent posts from Pinboard.in

    Returns a list of post dictionaries.

    """
    url = 'https://api.pinboard.in/v1/posts/recent'
    params = dict(auth_token=api_key, count=100, format='json')
    r = web.get(url, params)

    # throw an error if request failed
    # Workflow will catch this and show it to the user
    r.raise_for_status()

    # Parse the JSON returned by pinboard and extract the posts
    result = r.json()
    posts = result['posts']

    return posts

def search_key_for_post(post):
    """Generate a string search key for a post"""
    elements = []
    elements.append(post['description'])  # title of post
    elements.append(post['tags'])  # post tags
    elements.append(post['extended'])  # description
    return u' '.join(elements)

def main(wf):
    # build argument parser to parse script args and collect their
    # values
    parser = argparse.ArgumentParser()
    # add an optional (nargs='?') --setkey argument and save its
    # value to 'apikey' (dest). This will be called from a separate
    # "Run Script" action with the API key
    parser.add_argument('--setkey', dest='apikey', nargs='?', default=None)
    # add an optional query and save it to 'query'
    parser.add_argument('query', nargs='?', default=None)
    # parse the script's arguments
    args = parser.parse_args(wf.args)

    ####################################################################
    # Save the provided API key
    ####################################################################

    # decide what to do based on arguments
    if args.apikey:  # Script was passed an API key
        # save the key
        wf.save_password('pinboard_api_key', args.apikey)
        return 0  # 0 means script exited cleanly

    ####################################################################
    # Check that we have an API key saved
    ####################################################################

    try:
        api_key = wf.get_password('pinboard_api_key')
    except PasswordNotFound:  # API key has not yet been set
        wf.add_item('No API key set.',
                    'Please use pbsetkey to set your Pinboard API key.',
                    valid=False,
                    icon=ICON_WARNING)
        wf.send_feedback()
        return 0

    ####################################################################
    # View/filter Pinboard posts
    ####################################################################

    query = args.query

    # Retrieve posts from cache if available and no more than 600
    # seconds old
    def wrapper():
        """`cached_data` can only take a bare callable (no args),
        so we need to wrap callables needing arguments in a function
        that needs none.
        """
        return get_recent_posts(api_key)

    posts = wf.cached_data('posts', wrapper, max_age=600)

    # If script was passed a query, use it to filter posts
    if query:
        posts = wf.filter(query, posts, key=search_key_for_post, min_score=20)

    # Loop through the returned posts and add an item for each to
    # the list of results for Alfred
    for post in posts:
        wf.add_item(title=post['description'],
                    subtitle=post['href'],
                    arg=post['href'],
                    valid=True,
                    icon=ICON_WEB)

    # Send the results to Alfred as XML
    wf.send_feedback()
    return 0

if __name__ == u"__main__":
    wf = Workflow()
    sys.exit(wf.run(main))
get_password() raises a PasswordNotFound exception if the requested password isn’t in your Keychain, so we import PasswordNotFound and change if not api_key: to a try ... except clause (lines 65–72).
Try running your workflow again. It will complain that you haven’t saved your API key (it’s looking in Keychain now, not the settings), so set your API key once again, and you should be able to browse your recent posts in Alfred once more.
And if you open Keychain Access, you’ll find the API key safely tucked away in your Keychain:

As a bonus, if you have multiple Macs and use iCloud Keychain, the API key will be seamlessly synced across machines, saving you the trouble of setting up the workflow multiple times.
“Magic” arguments¶
Now that the API key is stored in Keychain, we don’t need it saved in the workflow’s settings any more (and having it there kind of defeats the purpose of using Keychain). To get rid of it, we can use one of Alfred-Workflow’s “magic” arguments: workflow:delsettings.
Open up Alfred, and enter pbrecent workflow:delsettings. You should see the following message:

Alfred-Workflow has recognised one of its “magic” arguments, performed the corresponding action, logged it to the log file, notified the user via Alfred and exited the workflow.
Magic arguments are designed to help coders develop and debug workflows. See “Magic” arguments for more details.
Logging¶
There’s a log, you say? Yup. There’s a logging.Logger instance at Workflow.logger configured to output to both the Terminal (in case you’re running your workflow script in Terminal) and your workflow’s log file. Normally, I use it like this:
from workflow import Workflow

log = None

def main(wf):
    log.debug('Started')

if __name__ == '__main__':
    wf = Workflow()
    log = wf.logger
    wf.run(main)
Assigning Workflow.logger to the module-global log is just a convenience. You could use wf.logger in its place.
Spit and polish¶
So far, the workflow’s looking pretty good. But there are still a few things that could be better. For one, it’s not necessarily obvious to a user where to find their Pinboard API key (it took me a good, hard Googling to find it while writing these tutorials). For another, it can be confusing if there are no results from a workflow and Alfred shows its fallback Google/Amazon searches instead. Finally, the workflow is unresponsive while updating the list of recent posts from Pinboard. That can’t be helped if we don’t have any posts cached, but apart from the very first run, we always will, so why don’t we show what we have and update in the background?
Let’s fix those issues. The easy ones first.
To solve the first issue (Pinboard API keys being hard to find), we’ll add a second Keyword input that responds to the same pbsetkey keyword as our other action, but this one will just send the user to the Pinboard password settings page where the API keys are kept.
Go back to your workflow in Alfred’s Preferences and add a new Keyword with the following settings:

Now when you type pbsetkey into Alfred, you should see two options:

The second action doesn’t do anything yet, of course, because we haven’t connected it to anything. So add an Open URL action in Alfred, enter this URL:
https://pinboard.in/settings/password
and leave all the settings at their defaults.

Finally, connect your new Keyword to the new Open URL action:

Enter pbsetkey into Alfred once more and try out the new action. Pinboard should open in your default browser.
Easy peasy.
Alfred’s default behaviour when a Script Filter returns no results is to show its fallback searches. This is also what it does if a workflow crashes. So, the best thing to do when a user is explicitly using your workflow is to show a message indicating that no results were found.
Change pinboard.py to the following:
# encoding: utf-8

import sys
import argparse

from workflow import Workflow, ICON_WEB, ICON_WARNING, web, PasswordNotFound

def get_recent_posts(api_key):
    """Retrieve recent posts from Pinboard.in

    Returns a list of post dictionaries.

    """
    url = 'https://api.pinboard.in/v1/posts/recent'
    params = dict(auth_token=api_key, count=100, format='json')
    r = web.get(url, params)

    # throw an error if request failed
    # Workflow will catch this and show it to the user
    r.raise_for_status()

    # Parse the JSON returned by pinboard and extract the posts
    result = r.json()
    posts = result['posts']

    return posts

def search_key_for_post(post):
    """Generate a string search key for a post"""
    elements = []
    elements.append(post['description'])  # title of post
    elements.append(post['tags'])  # post tags
    elements.append(post['extended'])  # description
    return u' '.join(elements)

def main(wf):
    # build argument parser to parse script args and collect their
    # values
    parser = argparse.ArgumentParser()
    # add an optional (nargs='?') --setkey argument and save its
    # value to 'apikey' (dest). This will be called from a separate
    # "Run Script" action with the API key
    parser.add_argument('--setkey', dest='apikey', nargs='?', default=None)
    # add an optional query and save it to 'query'
    parser.add_argument('query', nargs='?', default=None)
    # parse the script's arguments
    args = parser.parse_args(wf.args)

    ####################################################################
    # Save the provided API key
    ####################################################################

    # decide what to do based on arguments
    if args.apikey:  # Script was passed an API key
        # save the key
        wf.save_password('pinboard_api_key', args.apikey)
        return 0  # 0 means script exited cleanly

    ####################################################################
    # Check that we have an API key saved
    ####################################################################

    try:
        api_key = wf.get_password('pinboard_api_key')
    except PasswordNotFound:  # API key has not yet been set
        wf.add_item('No API key set.',
                    'Please use pbsetkey to set your Pinboard API key.',
                    valid=False,
                    icon=ICON_WARNING)
        wf.send_feedback()
        return 0

    ####################################################################
    # View/filter Pinboard posts
    ####################################################################

    query = args.query

    # Retrieve posts from cache if available and no more than 600
    # seconds old
    def wrapper():
        """`cached_data` can only take a bare callable (no args),
        so we need to wrap callables needing arguments in a function
        that needs none.
        """
        return get_recent_posts(api_key)

    posts = wf.cached_data('posts', wrapper, max_age=600)

    # If script was passed a query, use it to filter posts
    if query:
        posts = wf.filter(query, posts, key=search_key_for_post, min_score=20)

    if not posts:  # we have no data to show, so show a warning and stop
        wf.add_item('No posts found', icon=ICON_WARNING)
        wf.send_feedback()
        return 0

    # Loop through the returned posts and add an item for each to
    # the list of results for Alfred
    for post in posts:
        wf.add_item(title=post['description'],
                    subtitle=post['href'],
                    arg=post['href'],
                    valid=True,
                    icon=ICON_WEB)

    # Send the results to Alfred as XML
    wf.send_feedback()
    return 0

if __name__ == u"__main__":
    wf = Workflow()
    sys.exit(wf.run(main))
In lines 96–99, we check to see if there are any posts, and if not, we show the user a warning, send the results to Alfred and exit. This does away with Alfred’s distracting default searches and lets the user know exactly what’s going on.
All that remains is for our workflow to provide the blazing fast results Alfred users have come to expect. No waiting around for glacial web services for the likes of us. As long as we have some posts saved in the cache, we can show those while grabbing an updated list in the background (and notifying the user of the update, of course).
Now, there are a few different ways to start a background process. We could ask the user to set up a cron job, but cron isn’t the easiest software to use. We could add and load a Launch Agent, but that’d run indefinitely, whether or not the workflow is being used, and even if the workflow were uninstalled. So we’d best start our background process from within the workflow itself.
Normally, you’d use subprocess.Popen to start a background process, but that doesn’t necessarily work quite as you might expect in Alfred: Alfred treats your workflow as still running until the subprocess has finished, so it won’t call your workflow with a new query until the update is done. Which is exactly what happens now, and exactly the behaviour we want to avoid.
Fortunately, Alfred-Workflow provides the background module to solve this problem. Using the background.run_in_background() and background.is_running() functions, we can easily run a script in the background while our workflow remains responsive to Alfred’s queries.
Alfred-Workflow’s background module is based on, and uses the same API as, subprocess.call(), but it runs the command as a background daemon process (consequently, it won’t return anything). So, our updater script will be called from our main workflow script, but background will run it as a background process. This way, it will appear to exit immediately, so Alfred will keep on calling our workflow every time the query changes.
Meanwhile, our main workflow script will check if the background updater is running and post a useful, friendly notification if it is.
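In outline, the pattern looks like this (a minimal sketch, not the final workflow code; the job name update and the update.py script are the ones we’re about to create):
from workflow import Workflow
from workflow.background import run_in_background, is_running

def main(wf):
    # Start update.py as a background daemon unless it's already running.
    # The first argument is a name of our own choosing for the job.
    if not is_running('update'):
        run_in_background('update',
                          ['/usr/bin/python', wf.workflowfile('update.py')])

    # Let the user know data are being fetched
    if is_running('update'):
        wf.add_item('Getting new posts from Pinboard', valid=False)

    wf.send_feedback()

if __name__ == '__main__':
    wf = Workflow()
    wf.run(main)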
Let’s have at it.
Create a new file in the workflow root directory called update.py with these contents:
# encoding: utf-8

from workflow import web, Workflow, PasswordNotFound

def get_recent_posts(api_key):
    """Retrieve recent posts from Pinboard.in

    Returns a list of post dictionaries.

    """
    url = 'https://api.pinboard.in/v1/posts/recent'
    params = dict(auth_token=api_key, count=100, format='json')
    r = web.get(url, params)

    # throw an error if request failed
    # Workflow will catch this and show it to the user
    r.raise_for_status()

    # Parse the JSON returned by pinboard and extract the posts
    result = r.json()
    posts = result['posts']

    return posts

def main(wf):
    try:
        # Get API key from Keychain
        api_key = wf.get_password('pinboard_api_key')

        # Retrieve posts from cache if available and no more than 600
        # seconds old
        def wrapper():
            """`cached_data` can only take a bare callable (no args),
            so we need to wrap callables needing arguments in a function
            that needs none.
            """
            return get_recent_posts(api_key)

        posts = wf.cached_data('posts', wrapper, max_age=600)

        # Record our progress in the log file
        wf.logger.debug('{} Pinboard posts cached'.format(len(posts)))

    except PasswordNotFound:  # API key has not yet been set
        # Nothing we can do about this, so just log it
        wf.logger.error('No API key saved')

if __name__ == '__main__':
    wf = Workflow()
    wf.run(main)
At the top of the file (line 7), we’ve copied the get_recent_posts() function from pinboard.py (we won’t need it there any more).
The contents of the try block in main() (lines 29–44) are once again copied straight from pinboard.py (where we won’t be needing them any more). The except clause (lines 46–48) is there to trap the PasswordNotFound error that Workflow.get_password() will raise if the user hasn’t set their API key via Alfred yet. update.py can quietly die if no API key has been set, because pinboard.py takes care of notifying the user to set their API key.
Let’s try out update.py. Open a Terminal window at the workflow root directory and run the following:
python update.py
If it works, you should see something like this:
21:59:59 workflow.py:855 DEBUG get_password : net.deanishe.alfred-pinboard-recent:pinboard_api_key
21:59:59 workflow.py:544 DEBUG Loading cached data from : /Users/dean/Library/Caches/com.runningwithcrayons.Alfred-2/Workflow Data/net.deanishe.alfred-pinboard-recent/posts.cache
21:59:59 update.py:111 DEBUG 100 Pinboard posts cached
22:19:25 workflow.py:371 INFO Opening workflow log file
As you can see from the third line of output, update.py did its job.
Calling update.py from pinboard.py¶
So now let’s update pinboard.py to call update.py instead of doing the update itself:
# encoding: utf-8

import sys
import argparse

from workflow import (Workflow, ICON_WEB, ICON_INFO, ICON_WARNING,
                      PasswordNotFound)
from workflow.background import run_in_background, is_running

def search_key_for_post(post):
    """Generate a string search key for a post"""
    elements = []
    elements.append(post['description'])  # title of post
    elements.append(post['tags'])  # post tags
    elements.append(post['extended'])  # description
    return u' '.join(elements)

def main(wf):
    # build argument parser to parse script args and collect their
    # values
    parser = argparse.ArgumentParser()
    # add an optional (nargs='?') --setkey argument and save its
    # value to 'apikey' (dest). This will be called from a separate
    # "Run Script" action with the API key
    parser.add_argument('--setkey', dest='apikey', nargs='?', default=None)
    # add an optional query and save it to 'query'
    parser.add_argument('query', nargs='?', default=None)
    # parse the script's arguments
    args = parser.parse_args(wf.args)

    ####################################################################
    # Save the provided API key
    ####################################################################

    # decide what to do based on arguments
    if args.apikey:  # Script was passed an API key
        # save the key
        wf.save_password('pinboard_api_key', args.apikey)
        return 0  # 0 means script exited cleanly

    ####################################################################
    # Check that we have an API key saved
    ####################################################################

    try:
        wf.get_password('pinboard_api_key')
    except PasswordNotFound:  # API key has not yet been set
        wf.add_item('No API key set.',
                    'Please use pbsetkey to set your Pinboard API key.',
                    valid=False,
                    icon=ICON_WARNING)
        wf.send_feedback()
        return 0

    ####################################################################
    # View/filter Pinboard posts
    ####################################################################

    query = args.query

    # Get posts from cache. Set `data_func` to None, as we don't want to
    # update the cache in this script and `max_age` to 0 because we want
    # the cached data regardless of age
    posts = wf.cached_data('posts', None, max_age=0)

    # Start update script if cached data is too old (or doesn't exist)
    if not wf.cached_data_fresh('posts', max_age=600):
        cmd = ['/usr/bin/python', wf.workflowfile('update.py')]
        run_in_background('update', cmd)

    # Notify the user if the cache is being updated
    if is_running('update'):
        wf.add_item('Getting new posts from Pinboard',
                    valid=False,
                    icon=ICON_INFO)

    # If script was passed a query, use it to filter posts if we have some
    if query and posts:
        posts = wf.filter(query, posts, key=search_key_for_post, min_score=20)

    if not posts:  # we have no data to show, so show a warning and stop
        wf.add_item('No posts found', icon=ICON_WARNING)
        wf.send_feedback()
        return 0

    # Loop through the returned posts and add an item for each to
    # the list of results for Alfred
    for post in posts:
        wf.add_item(title=post['description'],
                    subtitle=post['href'],
                    arg=post['href'],
                    valid=True,
                    icon=ICON_WEB)

    # Send the results to Alfred as XML
    wf.send_feedback()
    return 0

if __name__ == u"__main__":
    wf = Workflow()
    sys.exit(wf.run(main))
First of all, we’ve changed the imports a bit. We no longer need workflow.web, because we’ll use the run_in_background() and is_running() functions from workflow.background to call update.py instead, and we’ve also imported another icon (ICON_INFO) to show our update message.
As noted before, get_recent_posts() has now moved to update.py, as has the wrapper() function inside main().
Also in main(), we no longer need api_key. However, we still want to know whether it has been saved (so we can show a warning if not), so we still call Workflow.get_password(), but without saving the result.
Most importantly, we’ve now expanded the update code to check whether our cached data are fresh with Workflow.cached_data_fresh() and to run the update.py script via background.run_in_background() if not (Workflow.workflowfile() returns the full path to a file in the workflow’s root directory).
Then we check if the update process is running via background.is_running(), using the name we assigned to the process (update), and notify the user via Alfred’s results if it is.
Finally, we call Workflow.cached_data() with None as the data-retrieval function (line 66) because we don’t want to run an update from this script, blocking Alfred. As a consequence, it’s possible that we’ll get back None instead of a list of posts if there are no cached data, so we check for this before trying to filter None in line 80.
The fruits of your labour¶
Now let’s give it a spin. Open up Alfred and enter pbrecent workflow:delcache to clear the cached data. Then enter pbrecent and start typing a query. You should see the “Getting new posts from Pinboard” message appear. Unfortunately, we won’t see any results at the moment because we just deleted the cached data.
To see our background updater weave its magic, we can change the max_age parameter passed to Workflow.cached_data() in update.py on line 42 and to Workflow.cached_data_fresh() in pinboard.py on line 69 to 60. Open up Alfred, enter pbrecent and a couple of letters, then twiddle your thumbs for ~55 seconds. Type another letter or two and you should see the “Getting new posts…” message and search results. Cool, huh?
Now you’ve produced a technical marvel, it’s time to tell the world and enjoy the well-earned plaudits. To build your workflow, open it up in Alfred’s Preferences, right-click on the workflow’s name in the list on the left-hand side, and choose Export…. This will save a .alfredworkflow file that you can share with other people. .alfredworkflow files are just ZIP files with a different extension. If you want to have a poke around inside one, just change the extension to .zip and extract it in the normal way.
And how do you share your Workflow with the world?
There’s a Share your Workflows thread on the official Alfred forum, but being a forum, it’s less than ideal as a directory for workflows. Also, you’d need to find your own place to host your workflow file (for which GitHub and Dropbox are both good, free choices).
It’s a good idea to sign up for the Alfred forum and post a thread for your workflow, so users can get in touch with you, but you might want to consider uploading it to Packal.org, a site specifically designed for hosting Alfred workflows. Your workflow will be much easier to find on that site than in the forum, and they’ll also host the workflow download for you.
Software, like plans, never survives contact with the enemy, err, user.
It’s likely that a bug or two will be found and some sweet improvements will be suggested, and so you’ll probably want to release a new and improved version of your workflow somewhere down the line.
Instead of requiring your users to regularly visit a forum thread or a website to check for an update, there are a couple of ways you can have your workflow (semi-)automatically updated.
The simplest way in terms of implementation is to upload your workflow to Packal.org. If you release a new version, any user who also uses the Packal Updater workflow will then be notified of the updated version. The disadvantage of this method is it only works if a user installs and uses the Packal Updater workflow.
A slightly more complex method to implement is to use Alfred-Workflow’s built-in support for updates via GitHub releases. If you tell your Workflow object the name of your GitHub repo and the installed workflow’s version number, Alfred-Workflow will automatically check for a new version every day.
By default, Alfred-Workflow won’t inform the user of the new version or update the workflow unless the user explicitly uses the workflow:update “magic” argument, but you can check the Workflow.update_available attribute and inform the user of the availability of an update if it’s True.
See Self-updating in the User Manual for information on how to enable your workflow to update itself from GitHub.
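As a rough sketch of what that looks like (the repo name and version string here are placeholders):
from workflow import Workflow

def main(wf):
    if wf.update_available:
        # Offer the update; actioning this item autocompletes to the
        # `workflow:update` magic argument, which installs it
        wf.add_item('A newer version of this workflow is available',
                    'Action this item to install the update',
                    autocomplete='workflow:update',
                    valid=False)

    # ... the workflow's normal results go here ...

    wf.send_feedback()

if __name__ == '__main__':
    wf = Workflow(update_settings={
        # GitHub repo the releases are published on (placeholder)
        'github_slug': 'yourname/your-workflow-repo',
        # Version of the currently installed workflow
        'version': 'v1.0',
    })
    wf.run(main)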
[1] argparse isn’t available in Python 2.6, so this workflow won’t run on Snow Leopard (10.6).
User Manual¶
This section describes how to use the features of Alfred-Workflow.
If you’re new to writing workflows or coding in general, start with the Tutorial.
Tip
If you’re writing a workflow that uses data from the system (e.g. from files/the filesystem or via command-line programs called via subprocess), please read Encoded strings and Unicode, which describes how to handle data from sources other than Alfred-Workflow’s libraries.
Supported OS X versions¶
Alfred 2 supports every version of OS X from 10.6 (Snow Leopard). Alfred-Workflow also supports the same versions, but there are a couple of things you have to watch out for because 10.6 has Python 2.6, while later versions have Python 2.7. As a result, if you want to maximise the compatibility of your workflow, you need to avoid using 2.7-only features in your code.
Here is the full list of new features in Python 2.7, but the most important things if you want your workflow to run on Snow Leopard are:
- argparse is not available in 2.6. Use getopt or include argparse in your workflow. Personally, I’m a big fan of docopt for parsing command-line arguments, but argparse is better for certain use cases.
- No dictionary views in 2.6.
- No set literals.
- No dictionary or set comprehensions.
- You must specify field numbers for str.format(), i.e. '{0}.{1}'.format(first, second), not just '{}.{}'.format(first, second).
- No Counter or OrderedDict in collections.
Python 2.6 is still included in later versions of OS X (up to and including Yosemite), so run your Python scripts with /usr/bin/python2.6 in addition to /usr/bin/python (2.7) to make sure they will run on Snow Leopard.
Workflow setup and skeleton¶
Alfred-Workflow is aimed particularly at authors of so-called Script Filters. These are activated by a keyword in Alfred, receive user input and return results to Alfred.
To write a Script Filter with Alfred-Workflow, make sure your Script Filter is set to use /bin/bash as the Language, and select the following (and only the following) Escaping options:
- Backquotes
- Double Quotes
- Dollars
- Backslashes
The Script field should contain the following:
/usr/bin/python yourscript.py "{query}"
where yourscript.py
is the name of your script [1].
Your workflow should start out like this. This enables Workflow to capture any errors thrown by your scripts:
#!/usr/bin/python
# encoding: utf-8

import sys

from workflow import Workflow

log = None


def main(wf):
    # The Workflow instance will be passed to the function
    # you call from `Workflow.run`

    # Your imports here if you want to catch import errors
    import somemodule
    import anothermodule

    # Get args from Workflow as normalized Unicode
    args = wf.args

    # Do stuff here ...

    # Add an item to Alfred feedback
    wf.add_item('Item title', 'Item subtitle')

    # Send output to Alfred
    wf.send_feedback()


if __name__ == '__main__':
    wf = Workflow()
    # Assign Workflow logger to a global variable for convenience
    log = wf.logger
    sys.exit(wf.run(main))
[1] It’s better to specify /usr/bin/python over just python. This ensures that the script will always be run with the system default Python regardless of what PATH might be.
Including 3rd party libraries¶
It’s a Very Bad Idea ™ to install (or ask users to install) 3rd-party libraries in the OS X system Python. Alfred-Workflow makes it easy to include them in your Workflow.
Simply create a lib subdirectory under your Workflow’s root directory and install your dependencies there. You can call the directory whatever you want, but in the following explanation, I’ll assume you used lib.
To install libraries in your dependencies directory, use:
pip install --target=path/to/my/workflow/lib python-lib-name
The path you pass as the --target argument should be the path to the directory under your Workflow’s root directory in which you want to install your libraries. python-lib-name should be the “pip name” (i.e. the name the library has on PyPI) of the library you want to install, e.g. requests or feedparser. This name is usually, but not always, the same as the name you use with import. For example, to install Alfred-Workflow, you would run pip install Alfred-Workflow but use import workflow to import it.
An example: You’re in a shell in Terminal.app in the Workflow’s root directory and you’re using lib as the directory for your Python libraries. You want to install requests. You would run:
pip install --target=lib requests
This will install the requests library into the lib subdirectory of the current working directory.
Then you instantiate Workflow with the libraries argument:
import sys

from workflow import Workflow


def main(wf):
    import requests  # Imported from ./lib


if __name__ == '__main__':
    wf = Workflow(libraries=['./lib'])
    sys.exit(wf.run(main))
When using this feature you do not need to create an __init__.py file in the lib subdirectory. Workflow(…, libraries=['./lib']) and creating ./lib/__init__.py are effectively equal alternatives.
Instead of using Workflow(…, libraries=['./lib']), you can add an empty __init__.py file to your lib subdirectory and import the libraries installed therein using:
from lib import requests
instead of simply:
import requests
Persistent data¶
Note
If you are writing your own files without using the Workflow APIs, please see A note on Script Behaviour.
Alfred provides special data and cache directories for each Workflow (in ~/Library/Application Support and ~/Library/Caches respectively). Workflow provides the following attributes/methods to make it easier to access these directories:
- datadir — The full path to your Workflow’s data directory.
- cachedir — The full path to your Workflow’s cache directory.
- datafile(filename) — The full path to filename under the data directory.
- cachefile(filename) — The full path to filename under the cache directory.
The cache directory may be deleted during system maintenance, and is thus only suitable for temporary data or data that is easily recreated. Workflow’s cache methods reflect this, and make it easy to replace cached data that are too old.
See Caching data for details of the data caching API.
The data directory is intended for more permanent, user-generated data, or data that cannot be otherwise easily recreated. See Storing data for details of the data storage API.
It is easy to specify a custom file format for your stored data via the serializer argument if you want your data to be readable by the user or by other software. See Serialization of stored/cached data for more details.
Tip
There are also similar methods related to the root directory of your Workflow (where info.plist and your code are):
- workflowdir — The full path to your Workflow’s root directory.
- workflowfile(filename) — The full path to filename under your Workflow’s root directory.
These are used internally to implement “Magic” arguments, which provide assistance with debugging, updating and managing your workflow.
In addition, Workflow also provides a convenient interface for storing persistent settings with Workflow.settings. See Settings and Keychain access for more information on storing settings and sensitive data.
Caching data¶
Workflow provides a few methods to simplify caching data that is slow to retrieve or expensive to generate (e.g. downloaded from a web API). These data are cached in your workflow’s cache directory (see cachedir). The main method is Workflow.cached_data(), which takes a name under which the data should be cached, a callable to retrieve the data if they aren’t in the cache (or are too old), and a maximum age in seconds for the cached data:
from workflow import web, Workflow


def get_data():
    return web.get('https://example.com/api/stuff').json()


wf = Workflow()
data = wf.cached_data('stuff', get_data, max_age=600)
To retrieve data only if they are in the cache, call with None as the data-retrieval function (which is the default):
data = wf.cached_data('stuff', max_age=600)
Note
This will return None if there are no corresponding data in the cache.
This is useful if you want to update your cache in the background, so it doesn’t impact your Workflow’s responsiveness in Alfred. (See the tutorial for an example of how to run an update script in the background.)
Tip
Passing max_age=0 will return the cached data regardless of age.
Clearing cached data¶
There is a convenience method for clearing a workflow’s cache directory.
clear_cache() will by default delete all the files contained in cachedir. This is the method called if you use the workflow:delcache or workflow:reset magic arguments.
You can selectively delete files from the cache by passing the optional filter_func argument to clear_cache(). This callable will be called with the filename (not path) of each file in the workflow’s cache directory. If filter_func returns True, the file will be deleted, otherwise it will be left in the cache. For example, to delete all .zip files in the cache, use:
def myfilter(filename):
    return filename.endswith('.zip')

wf.clear_cache(myfilter)
or more simply:
wf.clear_cache(lambda f: f.endswith('.zip'))
Storing data¶
Workflow provides two methods to store and retrieve permanent data: store_data() and stored_data(). These data are stored in your workflow’s data directory (see datadir).
from workflow import Workflow

wf = Workflow()
wf.store_data('name', data)
# data will be `None` if there is nothing stored under `name`
data = wf.stored_data('name')
These methods do not support the data expiry features of the cached data methods, but you can specify your own serializer for each datastore, making it simple to store data in, e.g., JSON or YAML format.
You should use these methods (and not the data caching ones) if the data you are saving should not be deleted as part of system maintenance.
If you want to specify your own file format/serializer, please see Serialization of stored/cached data for details.
Clearing stored data¶
As with cached data, there is a convenience method for deleting all the files stored in your workflow’s datadir. By default, clear_data() will delete all the files stored in datadir. It is used by the workflow:deldata and workflow:reset magic arguments.
It is possible to selectively delete files contained in the data directory by supplying the optional filter_func callable. Please see Clearing cached data for details on how filter_func works.
Settings¶
Workflow.settings is a subclass of dict that automatically saves its contents to the settings.json file in your Workflow’s data directory when it is changed.
Settings can be used just like a normal dict with the caveat that all keys and values must be serializable to JSON.
Warning
A Settings instance can only automatically recognise when you directly alter the values of its own keys:
wf = Workflow()
wf.settings['key'] = {'key2': 'value'}  # will be automatically saved
wf.settings['key']['key2'] = 'value2'   # will *not* be automatically saved
If you’ve altered a data structure stored within your workflow’s Workflow.settings, you need to explicitly call Workflow.settings.save().
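For example, a minimal sketch of altering a nested value and saving it explicitly (the key names are hypothetical):

wf = Workflow()
wf.settings['colours'] = {'background': 'white'}  # saved automatically
wf.settings['colours']['background'] = 'black'    # nested change: not auto-saved
wf.settings.save()                                # so save explicitly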
If you need to store arbitrary data, you can use the cached data API.
If you need to store data securely (such as passwords and API keys), Workflow also provides simple access to the OS X Keychain.
Keychain access¶
Methods Workflow.save_password(account, password), Workflow.get_password(account) and Workflow.delete_password(account) allow access to the Keychain. They may raise PasswordNotFound if no password is set for the given account or KeychainError if there is a problem accessing the Keychain. Passwords are stored in the user’s default Keychain. By default, the Workflow’s Bundle ID will be used as the service name, but this can be overridden by passing the service argument to the above methods.
Example usage:
from workflow import Workflow

wf = Workflow()

wf.save_password('hotmail-password', 'password1lolz')

password = wf.get_password('hotmail-password')

wf.delete_password('hotmail-password')

# raises PasswordNotFound exception
password = wf.get_password('hotmail-password')
See the relevant part of the tutorial for a full example.
A note on Script Behaviour¶
In version 2.7, Alfred introduced a new Script Behaviour setting for Script Filters. This allows you (among other things) to specify that a running script should be killed if the user continues typing in Alfred.
If you enable this setting, it’s possible that Alfred will terminate your script in the middle of some critical code (e.g. writing a file). Alfred-Workflow provides the uninterruptible decorator to prevent your script being terminated in the middle of a critical function. Any function wrapped with uninterruptible will be executed fully, and any signal caught during its execution will be handled when your function completes.
For example:
from workflow.workflow import uninterruptible


@uninterruptible
def critical_function():
    # Your critical code here
    pass
If you only want to write to a file, you can use the atomic_writer context manager. This does not guarantee that the file will be written, but does guarantee that it will only be written if the write succeeds (the data is first written to a temporary file).
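As a rough sketch (assuming atomic_writer takes a file path and a mode, like the built-in open(); check the API documentation for the exact signature), writing a data file atomically might look like this:

from workflow import Workflow
from workflow.workflow import atomic_writer

wf = Workflow()
data = b'{"example": true}'  # hypothetical payload

# The file only appears at its final path if the write completes
# without an exception; otherwise the temporary file is discarded
with atomic_writer(wf.datafile('results.json'), 'wb') as handle:
    handle.write(data)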
Searching/filtering data¶
Workflow.filter() provides an Alfred-like search algorithm for filtering your workflow’s data. By default, Workflow.filter() will try to match your search query via CamelCase, substring, initials and all characters, applying different weightings to the various kinds of matches (see Workflow.filter() for a detailed description of the algorithm and match flags).
Warning
Check query before calling Workflow.filter(). query may not be empty or contain only whitespace; such a query will raise a ValueError.
Workflow.filter() is not a “little sister” of a Script Filter and won’t return a list of all results if query is empty. query is not an optional argument, and trying to filter data against a meaningless query is treated as an error. Workflow.filter() won’t complain if items is an empty list, but it will raise a ValueError if query is empty.
Best practice is to do the following:
def main(wf):
    query = None  # Ensure `query` is initialised

    # Set `query` if a value was passed (it may be an empty string)
    if len(wf.args):
        query = wf.args[0]

    items = load_my_items_from_somewhere()  # Load data from blah

    if query:  # Only call `filter()` if there's a `query`
        items = wf.filter(query, items)

    # Show error if there are no results. Otherwise, Alfred will show
    # its fallback searches (i.e. "Search Google for 'XYZ'")
    if not items:
        wf.add_item('No items', icon=ICON_WARNING)

    # Generate list of results. If `items` is an empty list,
    # nothing will happen
    for item in items:
        wf.add_item(item['title'], ...)

    wf.send_feedback()  # Send results to Alfred via STDOUT
This is by no means essential (wf.args[0] will always be set if the script is called from Alfred via python thescript.py "{query}"), but it won’t work from the command line unless called with an empty string (python thescript.py ""), and it’s good to be aware of when you’re dealing with unset/empty variables.
Note
By default, Workflow.filter() will match and return anything that contains all the characters in query in the same order, regardless of case. Not only can this lead to unacceptable performance when working with thousands of items, but it’s also very likely that you’ll want to set the standard a little higher. See Restricting results for info on how to do that.
To use Workflow.filter(), pass it a query, a list of items to filter and sort, and if your list contains items other than strings, a key function that generates a string search key for each item:
from workflow import Workflow

names = ['Bob Smith', 'Carrie Jones', 'Harry Johnson', 'Sam Butterkeks']

wf = Workflow()
hits = wf.filter('bs', names)
Which returns:
['Bob Smith', 'Sam Butterkeks']
(bs are Bob Smith’s initials and Butterkeks contains both letters in that order.)
If your data are not strings:
from workflow import Workflow

books = [
    {'title': 'A damn fine afternoon', 'author': 'Bob Smith'},
    {'title': 'My splendid adventure', 'author': 'Carrie Jones'},
    {'title': 'Bollards and other street treasures', 'author': 'Harry Johnson'},
    {'title': 'The horrors of Tuesdays', 'author': 'Sam Butterkeks'}
]


def key_for_book(book):
    return '{} {}'.format(book['title'], book['author'])


wf = Workflow()
hits = wf.filter('bot', books, key_for_book)
Which returns:
[{'author': 'Harry Johnson', 'title': 'Bollards and other street treasures'},
{'author': 'Bob Smith', 'title': 'A damn fine afternoon'}]
Restricting results¶
Chances are, you would not want bot to match Bob Smith’s A damn fine afternoon at all, or indeed any of the other books. Indeed, they have very low scores:
hits = wf.filter('bot', books, key_for_book, include_score=True)
produces:
[({'author': 'Bob Smith', 'title': 'A damn fine afternoon'},
11.11111111111111,
64),
({'author': 'Harry Johnson', 'title': 'Bollards and other street treasures'},
3.3333333333333335,
64),
({'author': 'Sam Butterkeks', 'title': 'The horrors of Tuesdays'}, 3.125, 64)]
(64 is the rule that matched, MATCH_ALLCHARS, which matches if all the characters in query appear in order in the search key, regardless of case.)
Tip
rules in filter() results are returned as integers. To see the name of the corresponding rule, see Matching rules.
If we filter {'author': 'Brienne of Tarth', 'title': 'How to beat up men'} and {'author': 'Zoltar', 'title': 'Battle of the Planets'}, which we probably would want to match bot, we get:
[({'author': 'Zoltar', 'title': 'Battle of the Planets'}, 98.0, 8),
({'author': 'Brienne of Tarth', 'title': 'How to beat up men'}, 90.0, 16)]
(The ranking would be reversed if key_for_book() returned author title instead of title author.)
So in all likelihood, you’ll want to pass a min_score argument to Workflow.filter():
hits = wf.filter('bot', books, key_for_book, min_score=20)
and/or exclude some of the matching rules:
from workflow import Workflow, MATCH_ALL, MATCH_ALLCHARS

# [...]

hits = wf.filter('bot', books, key_for_book, match_on=MATCH_ALL ^ MATCH_ALLCHARS)
You can set match rules using bitwise operators, so | to combine them or ^ to remove them from MATCH_ALL:
# match only CamelCase and initials
match_on=MATCH_CAPITALS | MATCH_INITIALS

# match everything but all-characters-in-item and substring
match_on=MATCH_ALL ^ MATCH_ALLCHARS ^ MATCH_SUBSTRING
Warning
MATCH_ALLCHARS is particularly slow and provides the worst matches. You should consider excluding it, especially if you’re calling Workflow.filter() with more than a few hundred items or expect multi-word queries.
Diacritic folding¶
By default, Workflow.filter() will fold non-ASCII characters to approximate ASCII equivalents (e.g. é > e, ü > u) if query contains only ASCII characters. This behaviour can be turned off by passing fold_diacritics=False to Workflow.filter().
Note
To keep the library small, only a subset of European languages are supported. The Unidecode library should be used for comprehensive support of non-European alphabets.
Users may override a Workflow’s default settings via workflow:folding… magic arguments.
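For example, a minimal sketch of turning folding off for a single filter() call (the names are just examples):

# encoding: utf-8
from workflow import Workflow

wf = Workflow()
names = [u'Günther', u'Gunther']

# With folding (the default), the ASCII-only query matches both names
both = wf.filter(u'gunther', names)

# With folding turned off, only the unaccented name matches
unaccented_only = wf.filter(u'gunther', names, fold_diacritics=False)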
“Smart” punctuation¶
The default diacritic folding only alters letters, not punctuation. If your workflow also works with text that contains so-called “smart” (i.e. curly) quotes or n- and m-dashes, you can use the Workflow.dumbify_punctuation() method to replace smart quotes and dashes with normal quotes and hyphens respectively.
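For example, a minimal sketch of dumbing down a search key before filtering (the sample text is just an example):

# encoding: utf-8
from workflow import Workflow

wf = Workflow()
title = u'Alfred\u2019s \u201csmart\u201d quotes \u2014 and dashes'

# Replace curly quotes and dashes with plain ASCII quotes and hyphens
plain = wf.dumbify_punctuation(title)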
Matching rules¶
Here are the MATCH_* constants from workflow and their numeric values. For a detailed description of the rules see Workflow.filter().
Name | Value |
---|---|
MATCH_STARTSWITH | 1 |
MATCH_CAPITALS | 2 |
MATCH_ATOM | 4 |
MATCH_INITIALS_STARTSWITH | 8 |
MATCH_INITIALS_CONTAIN | 16 |
MATCH_INITIALS | 24 |
MATCH_SUBSTRING | 32 |
MATCH_ALLCHARS | 64 |
MATCH_ALL | 127 |
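As a quick sketch of how these values are used, you can compare the rule returned when include_score=True against the constants (the data are just examples):

from workflow import Workflow, MATCH_ALLCHARS

names = ['Bob Smith', 'Carrie Jones', 'Harry Johnson', 'Sam Butterkeks']

wf = Workflow()
for name, score, rule in wf.filter('bs', names, include_score=True):
    # `rule` holds the integer value of the matching rule that fired
    if rule == MATCH_ALLCHARS:
        print('{0} only matched on the weakest rule'.format(name))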
Retrieving data from the web¶
The unit tests in the source repository contain examples of pretty much everything workflow.web can do:
- GET and POST variables
- Retrieve and decode JSON
- Post JSON
- Post forms
- Automatically handle encoding for HTML and XML
- Basic authentication
- File uploads with forms and without forms
- Download large files
- Variable timeouts
- Ignore redirects
See the API documentation for more information.
Background processes¶
Many workflows provide a convenient interface to applications and/or web services.
For performance reasons, it’s common for workflows to cache data locally, but updating this cache typically takes a few seconds, making your workflow unresponsive while an update is occurring, which is very un-Alfred-like.
To avoid such delays, Alfred-Workflow provides the background module to allow you to easily run scripts in the background. There are two functions, run_in_background() and is_running(), that provide the interface. The processes started are full daemon processes, so you can start real servers as easily as simple scripts.
Here’s an example of a common usage pattern (updating cached data in the background). What we’re doing is:
- Checking the age of the cached data and running the update script via run_in_background() if the cached data are too old or don’t exist.
- (Optionally) informing the user that data are being updated.
- Loading the cached data regardless of age.
- Displaying the cached data (if any).
from workflow import Workflow, ICON_INFO
from workflow.background import run_in_background, is_running


def main(wf):
    # Is cache over 1 hour old or non-existent?
    if not wf.cached_data_fresh('exchange-rates', 3600):
        run_in_background('update',
                          ['/usr/bin/python',
                           wf.workflowfile('update_exchange_rates.py')])

    # Add a notification if the script is running
    if is_running('update'):
        wf.add_item('Updating exchange rates...', icon=ICON_INFO)

    # max_age=0 will load any cached data regardless of age
    exchange_rates = wf.cached_data('exchange-rates', max_age=0)

    # Display (possibly stale) cache data
    if exchange_rates:
        for rate in exchange_rates:
            wf.add_item(rate)

    # Send results to Alfred
    wf.send_feedback()


if __name__ == '__main__':
    wf = Workflow()
    wf.run(main)
For a working example, see Part 2 of the Tutorial or the source code of my Git Repos workflow, which is a bit smarter about showing the user update information.
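The background script itself is an ordinary Python script that fetches the data and saves it to the cache under the same name the Script Filter reads from. A minimal sketch of what update_exchange_rates.py might contain (the URL is just a placeholder):

from workflow import web, Workflow


def main(wf):
    # Fetch fresh data and overwrite the cache the Script Filter reads from
    rates = web.get('https://example.com/api/rates').json()
    wf.cache_data('exchange-rates', rates)


if __name__ == '__main__':
    wf = Workflow()
    wf.run(main)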
Self-updating¶
New in version 1.9.
Add self-updating capabilities to your workflow. It regularly (every day by default) fetches the latest releases from the specified GitHub repository and then asks the user if they want to update the workflow if a newer version is available.
Users can turn off automatic checks for updates with the workflow:noautoupdate magic argument and back on again with workflow:autoupdate.
Danger
If you are not careful, you might accidentally overwrite a local version of the workflow you’re working on and lose all your changes! It’s a good idea to make sure you increase the version number before you start making any changes.
Currently, only updates from GitHub releases are supported.
GitHub releases¶
For your workflow to be able to recognise and download newer versions, the version value you pass to Workflow should be one of the versions (i.e. tags) in the corresponding GitHub repo’s releases list. See Version numbers for more information.
There must be one (and only one) .alfredworkflow binary attached to a release, otherwise the release will be ignored. This is the file that will be downloaded and installed via Alfred’s default installation mechanism.
Important
Releases marked as pre-release on GitHub will be ignored.
Configuration¶
To use self-updating, you must pass a dict as the update_settings argument to Workflow. It must have the key/value pair github_slug, which is your username and the name of the workflow’s repo in the format username/reponame. The version of the currently installed workflow must also be specified. You can do this in the update_settings dict or in a version file in the root of your workflow (next to info.plist), e.g.:
from workflow import Workflow

__version__ = '1.1'

...

wf = Workflow(..., update_settings={
    # Your username and the workflow's repo's name
    'github_slug': 'username/reponame',
    # The version (i.e. release/tag) of the installed workflow
    # If a `version` file exists in the root of your workflow,
    # this key may be omitted
    'version': __version__,
    # Optional number of days between checks for updates
    'frequency': 7
}, ...)

...

if wf.update_available:
    # Download new version and tell Alfred to install it
    wf.start_update()
Or alternatively, create a version file in the root directory of your workflow alongside info.plist:
Your Workflow/
icon.png
info.plist
yourscript.py
version
workflow/
...
...
The version file should be plain text with no file extension and contain nothing but the version string, e.g.:
1.2.5
Using a version file:
from workflow import Workflow

...

wf = Workflow(..., update_settings={
    # Your username and the workflow's repo's name
    'github_slug': 'username/reponame',
    # Optional number of days between checks for updates
    'frequency': 7
}, ...)

...

if wf.update_available:
    # Download new version and tell Alfred to install it
    wf.start_update()
You must use semantic version numbering. Please see Versioning and migration for detailed information on the required version number format and associated features.
Note
Alfred-Workflow will automatically check in the background if a newer version of your workflow is available, but will not automatically inform the user nor download and install the update.
Usage¶
You can just leave it up to the user to check update status and install new versions manually using the workflow:update magic argument in a Script Filter, or you could roll your own update handling using Workflow.update_available and Workflow.start_update() to check for and install newer versions respectively.
The simplest way, however, is usually to add an update notification to the top of your Script Filter’s results that triggers Alfred-Workflow’s workflow:update magic argument:
wf = Workflow(...update_settings={...})

if wf.update_available:
    # Add a notification to top of Script Filter results
    wf.add_item('New version available',
                'Action this item to install the update',
                autocomplete='workflow:update',
                icon=ICON_INFO)

# Show other results here
...
By adding an Item with valid=False and autocomplete='workflow:update', Alfred’s query will be expanded to workflow:update when a user actions the item, which is a magic argument that will in turn prompt Alfred-Workflow to download and install the update.
Under the hood¶
The check_update() method is called automatically when you call Workflow.run(). If sufficient time has elapsed since the last check (1 day by default), it starts a background process that checks for new releases. You can alter the update interval with the optional frequency key in the update_settings dict (see the example above).
Workflow.update_available is True if an update is available, and False otherwise.
Workflow.start_update() returns False if no update is available. If one is, it returns True, then downloads the newer version and tells Alfred to install it in the background.
If you want more control over the update mechanism, you can use update.check_update() directly. It caches information on the latest available release under the cache key __workflow_update_status, which you can access via Workflow.cached_data().
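For example, a minimal sketch of peeking at that cache entry (the repo slug is a placeholder, and the exact structure of the cached data isn’t guaranteed here):

from workflow import Workflow

wf = Workflow(update_settings={'github_slug': 'username/reponame',
                               'version': 'v1.0'})

# Read whatever the last background check stored, regardless of age
status = wf.cached_data('__workflow_update_status', max_age=0)
if status:
    wf.logger.debug('update status: {0!r}'.format(status))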
Version numbers¶
Please see Versioning and migration for detailed information on the required version number format and associated features.
Versioning and migration¶
New in version 1.10.
If you intend to distribute your workflow, it’s a good idea to use version numbers. It allows users to see if they’re using an out-of-date version, and more importantly, it allows you to know which version a user has when they ask you for support or to fix a bug (that you may already have fixed).
If your workflow has a version number set (see Setting a version number), the version will be logged every time the workflow is run to help with debugging, and can also be displayed using the workflow:version magic argument.
If you wish to use the self-updating feature, your workflow must have a version number.
Having a version number also enables the first run/migration functionality. See First run/migration below for details.
Setting a version number¶
There are two ways to set a version number. The simplest and best is to create a version file in the root directory of your workflow (next to info.plist) that contains the version number:
Your Workflow/
icon.png
info.plist
yourscript.py
version
workflow/
...
You may also specify the version number using the version key in the update_settings dictionary passed to Workflow, though you can only use this method if your workflow supports self-updates from GitHub.
Using a version file is preferable, as then you only need to maintain the version number in one place.
Version numbers¶
In version 1.10 and above, Alfred-Workflow requires Semantic versioning, which is the format GitHub also expects. Alfred-Workflow deviates from the semantic versioning standard slightly, most notably in that you don’t have to specify a minor or patch version, i.e. 1.0 is fine, as is simply 1 (the standard requires these to both be written 1.0.0). See Semantic versioning for more details on version formatting.
The de-facto way to tag releases on GitHub is to use a semantic version number preceded by v, e.g. v1.0, v2.3.1 etc., whereas the de-facto way to version Python libraries is to do the same, but without the preceding v, e.g. 1.0, 2.3.1 etc.
As a result, Alfred-Workflow will strip a preceding v from both local and remote versions (i.e. you can specify 1.0 or v1.0 in either or both of your Python code and GitHub releases).
When this is done, if the latest GitHub version is higher than the local version, Alfred-Workflow will consider the remote version to be an update.
Thus, calling Workflow with update_settings={'version': '1.2', ...} or update_settings={'version': 'v1.2', ...} will be considered the same version as the GitHub release tag v1.2 or 1.2 (or indeed 1.2.0).
Semantic versioning¶
Semantic versioning is a standard for formatting software version numbers. Essentially, a version number must consist of a major version number, a minor version number and a patch version number separated by dots, e.g. 1.0.1, 2.10.3 etc. You should increase the patch version when you fix bugs, the minor version when you add new features and the major version if you change the API.
You may also add additional pre-release version info to the end of the version number, preceded by a hyphen (-), e.g. 2.0.0-rc.1 or 2.0.0-beta.
Alfred-Workflow differs from the standard in that you aren’t required to specify a minor or patch version, i.e. 1.0 is fine, as is 1 (and both are considered equal and also equal to 1.0.0).
This change was made as relatively few workflow authors use patch versions.
See the semantic versioning website for full details of the standard and the rationale behind it.
First run/migration¶
New in version 1.10.
If your workflow uses version numbers, you can use the Workflow.first_run and Workflow.last_version_run attributes to bootstrap newly-installed workflows or to migrate data from an older version.
first_run will be True if this version of the workflow has never run before. If an older version has previously run, last_version_run will contain the version of that workflow.
Both last_version_run and version are Version instances (or None) to make comparison easy. Be sure to check for None before comparing them: comparing Version and None will raise a ValueError.
last_version_run is set to the value of the currently running workflow if it runs successfully without raising an exception.
Important
last_version_run will only be set automatically if you run your workflow via Workflow.run(). This is because Workflow is often used as a utility class by other workflow scripts, and you don’t want your background update script to confuse things by setting the wrong version.
If you want to set last_version_run yourself, use set_last_version().
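A minimal sketch of how a first-run/migration check might look (the settings key and the migration helper are hypothetical, and it assumes the Version class is importable from workflow.update):

from workflow import Workflow
from workflow.update import Version


def migrate_old_data(wf):
    # Hypothetical helper: move or convert data from the old layout here
    pass


wf = Workflow()  # assumes a `version` file or update_settings version is set

if wf.first_run:
    if wf.last_version_run is None:
        # Never run in any version before: set hypothetical defaults
        wf.settings['sort_order'] = 'newest'
    elif wf.last_version_run < Version('2.0'):
        # An older version ran previously: migrate its data
        migrate_old_data(wf)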
System icons¶
The workflow module provides access to a number of default OS X icons via ICON_* constants for use when generating Alfred feedback:
from workflow import Workflow, ICON_INFO

wf = Workflow()
wf.add_item('For your information', icon=ICON_INFO)
wf.send_feedback()
List of icons¶
These are all the icons accessible in workflow. They (and more) can be found in /System/Library/CoreServices/CoreTypes.bundle/Contents/Resources/.
- ICON_ACCOUNT
- ICON_BURN
- ICON_CLOCK
- ICON_COLOR
- ICON_COLOUR
- ICON_EJECT
- ICON_ERROR
- ICON_FAVORITE
- ICON_FAVOURITE
- ICON_GROUP
- ICON_HELP
- ICON_HOME
- ICON_INFO
- ICON_NETWORK
- ICON_NOTE
- ICON_SETTINGS
- ICON_SWIRL
- ICON_SWITCH
- ICON_SYNC
- ICON_TRASH
- ICON_USER
- ICON_WARNING
- ICON_WEB
If you’d like other standard OS X icons to be added, please add an issue on GitHub.
“Magic” arguments¶
If your Script Filter (or script) accepts a query (or command line arguments), you can pass it so-called magic arguments that instruct Workflow to perform certain actions, such as opening the log file or clearing the cache/settings.
These can be a big help while developing and debugging and especially when debugging problems your Workflow’s users may be having.
The Workflow.run() method (which you should “wrap” your Workflow’s entry functions in) will catch any raised exceptions, log them and display them in Alfred. You can call your Workflow with workflow:openlog as an Alfred query/command line argument and Workflow will open the Workflow’s log file in the default app (usually Console.app).
This makes it easy for you to get at the log file and data and cache directories (hidden away in ~/Library), and for your users to send you their logs for debugging.
Note
Magic arguments will only work with scripts that accept arguments and use the args property (where magic arguments are parsed).
Workflow supports the following magic arguments by default:
- workflow:magic — List available magic arguments.
- workflow:help — Open workflow’s help URL in default web browser. This URL is specified in the help_url argument to Workflow.
- workflow:version — Display the installed version of the workflow (if one is set).
- workflow:delcache — Delete the Workflow’s cache.
- workflow:deldata — Delete the Workflow’s saved data.
- workflow:delsettings — Delete the Workflow’s settings file (which contains the data stored using Workflow.settings).
- workflow:foldingdefault — Reset diacritic folding to workflow default.
- workflow:foldingoff — Never fold diacritics in search keys.
- workflow:foldingon — Force diacritic folding in search keys (e.g. convert ü to ue).
- workflow:opencache — Open the Workflow’s cache directory.
- workflow:opendata — Open the Workflow’s data directory.
- workflow:openlog — Open the Workflow’s log file in the default app.
- workflow:openterm — Open a Terminal window in the Workflow’s root directory.
- workflow:openworkflow — Open the Workflow’s root directory (where info.plist is).
- workflow:reset — Delete the Workflow’s settings, cache and saved data.
- workflow:update — Check for a newer version of the workflow using GitHub releases and install the newer version if one is available.
- workflow:noautoupdate — Turn off automatic checks for updates.
- workflow:autoupdate — Turn automatic checks for updates on.
The three workflow:folding… settings allow users to override the diacritic folding set by a workflow’s author. This may be useful if the author’s choice does not correspond with a user’s usage pattern.
You can turn off magic arguments by passing capture_args=False to Workflow on instantiation, or call the corresponding methods of Workflow directly, perhaps assigning your own keywords within your Workflow:
- open_help()
- open_log()
- open_cachedir()
- open_datadir()
- open_workflowdir()
- open_terminal()
- clear_cache()
- clear_data()
- clear_settings()
- reset() (a shortcut to call the three previous clear_* methods)
- check_update()
- start_update()
Customising magic arguments¶
The default prefix for magic arguments (workflow:) is contained in the magic_prefix attribute of Workflow. If you want to change it to, say, wf: (which will become the default in v2 of Alfred-Workflow), simply reassign it:
wf.magic_prefix = 'wf:'
The magic arguments are defined in the Workflow.magic_arguments dictionary. The dictionary keys are the keywords for the arguments (without the prefix) and the values are functions that should be called when the magic argument is entered. You can show a message in Alfred by returning a unicode string from the function.
To add a new magic argument that opens the workflow’s settings file, you could do:
import subprocess

wf = Workflow()
wf.magic_prefix = 'wf:'  # Change prefix to `wf:`


def opensettings():
    subprocess.call(['open', wf.settings_path])
    return 'Opening workflow settings...'


wf.magic_arguments['settings'] = opensettings
Now entering wf:settings as your workflow’s query in Alfred will open settings.json in the default application.
Serialization of stored/cached data¶
By default, both cache and data files (created using the APIs described in Persistent data) are serialized with cPickle. This provides a great compromise in terms of speed and the ability to store arbitrary objects.
When changing or specifying a serializer, use the name under which the serializer is registered with the workflow.manager object.
Warning
When it comes to cache data, it is strongly recommended to stick with the default. cPickle is very fast and fully supports standard Python data structures (dict, list, tuple, set etc.).
If you really must customise the cache data format, you can change the default cache serialization format to pickle thus:
wf = Workflow()
wf.cache_serializer = 'pickle'
Unlike the stored data API, the cached data API can’t determine the format of the cached data. If you change the serializer without clearing the cache, errors will probably result as the serializer tries to load data in a foreign format.
In the case of stored data, you are free to specify either a global default serializer or one for each individual datastore:
wf = Workflow()
# Use `pickle` as the global default serializer
wf.data_serializer = 'pickle'

# Use the JSON serializer only for these data
wf.store_data('name', data, serializer='json')
This is primarily so you can create files that are human-readable or usable by other software. The generated JSON is formatted to make it readable.
The stored_data() method can automatically determine the serialization of the stored data (based on the file extension, which is the same as the name the serializer is registered under), provided the corresponding serializer is registered. If it isn’t, a ValueError will be raised.
Built-in serializers¶
There are 3 built-in, pre-configured serializers:
cpickle
— the default serializer for both cached and stored data, with very good support for native Python data types;pickle
— a more flexible, but much slower alternative tocpickle
; andjson
— a very common data format, but with limited support for native Python data types.
See the built-in cPickle, pickle and json libraries for more information on the serialization formats.
Managing serializers¶
You can add your own serializer, or replace the built-in ones, using the configured instance of SerializerManager at workflow.manager, e.g. from workflow import manager.
A serializer object must have load() and dump() methods that work the same way as in the built-in json and pickle libraries, i.e.:
# Reading
obj = serializer.load(open('filename', 'rb'))

# Writing
serializer.dump(obj, open('filename', 'wb'))
To register a new serializer, call the register() method of the workflow.manager object with the name of the serializer and the object that performs serialization:
from workflow import Workflow, manager


class MySerializer(object):

    @classmethod
    def load(cls, file_obj):
        # load data from file_obj
        pass

    @classmethod
    def dump(cls, obj, file_obj):
        # serialize obj to file_obj
        pass


manager.register('myformat', MySerializer())
Note
The name you specify for your serializer will be the file extension of the stored files.
Serializer interface¶
A serializer must conform to this interface (like json and pickle):
serializer.load(file_obj)
serializer.dump(obj, file_obj)
See the Serialization section of the API documentation for more information.
Encoded strings and Unicode¶
This is a brief guide to Unicode and encoded strings aimed at Alfred-Workflow users (and Python coders in general) who are unfamiliar with them.
Encoding errors are by far the most common group of bugs in Python workflows in the wild (they’re so easy for developers to miss).
This guide should give you an idea of what Unicode and encoded strings are, and why and how you as a workflow developer should deal with them.
Important
String encoding is something Python 2 will let you largely ignore. It will happily let you mix strings of different encodings without complaint (although the result will most likely be garbage) and if you mix Unicode and encoded strings, Python will silently “promote” the encoded string to Unicode by decoding it as ASCII. If your workflow only ever uses ASCII, you need never worry about Unicode or string encoding.
But make no mistake: if you distribute your workflow, somebody will feed your workflow non-ASCII text. Although Alfred is English-only, it’s not used exclusively by monolingual English speakers. What’s more, standard English-language characters, like £ or €, are also non-ASCII.
If you intend to distribute your workflow, you should make sure it works with non-ASCII text.
If you don’t, I guarantee a text-encoding issue will be one of the first bug reports.
TL;DR¶
Best practice in Python programs is to use Unicode internally and decode all text input and encode all text output at IO boundaries (i.e. right where it enters/leaves your program). On OS X, UTF-8 is almost always the right encoding.
Be sure to decode all input from, and encode all output to, the system (in particular via subprocess and when passing a {query} to a subsequent workflow action).
If you don’t, your workflow will break or, at best, not work as intended when someone feeds it non-ASCII text.
Alfred-Workflow will almost always give you Unicode strings. (The exception is web.Response, whose text() method will return an encoded string if it couldn’t determine the encoding.)
Use Workflow.decode() to decode input and u'My unicode string'.encode('utf-8') to encode output, e.g.:
#!/usr/bin/python
# encoding: utf-8

# Because we want to work with Unicode, it's simpler if we make
# literal strings in source code Unicode strings by default, so
# we set `encoding: utf-8` at the very top of the script to tell Python
# that this source file is UTF-8 and import `unicode_literals` before any
# code.
#
# See Tip further down the page for more info

from __future__ import unicode_literals, print_function

import os
import subprocess

from workflow import Workflow

wf = Workflow()

# wf.args decodes and normalizes sys.argv for you
query = wf.args[0]

# `subprocess` returns encoded strings (UTF-8 in this case)
# Note: the arguments are prefixed with `b` because of unicode_literals
# You should pass encoded strings to `subprocess`. It doesn't much
# matter in this case, as everything can be encoded to ASCII, but if you're
# passing in, say, a user-supplied query, be sure to encode it to UTF-8
output = subprocess.check_output([b'mdfind', b'-onlyin',
                                  os.getenv('HOME'),
                                  b'kind:folder date:today'])

# Convert to Unicode and NFC-normalize
output = wf.decode(output)

# Split the output into individual filepaths
paths = [s.strip() for s in output.split('\n') if s.strip()]

# Filter paths by query
paths = wf.filter(query, paths,
                  # We just want to filter on filenames, not the whole path
                  key=lambda s: os.path.basename(s),
                  min_score=30)

if paths:
    # For demonstration purposes, pass the first result as `{query}`
    # to the next workflow Action.
    print(paths[0].encode('utf-8'))
String types¶
In Python, there are two different kinds of strings: Unicode and encoded strings.
Unicode strings only exist within running programs (Unicode is a concept rather than a concrete implementation), while encoded strings are binary data that are encoded according to some scheme that maps characters to a specific binary representation (e.g. UTF-8 or ASCII).
In Python, these have the types unicode and str respectively.
As noted, Unicode strings only exist within a running program. Any text stored on disk, passed into or out of a program or transmitted over a network must be encoded. On OS X, almost all text (e.g. filenames, most text output from programs) is encoded with UTF-8.
In order for your program to work properly, it’s important to ensure that all text is of the same type/encoding:
>>> u = u'Fahrvergnügen' # This is a Unicode string
>>> enc1 = u.encode('utf-8') # OS X default encoding
>>> enc2 = u.encode('latin-1') # Older standard German encoding
>>> enc1 == enc2
False
>>> u == enc1
UnicodeWarning: Unicode equal comparison failed to convert both arguments to Unicode - interpreting them as being unequal
False
>>> unicode(enc1, 'utf-8') == unicode(enc2, 'latin-1')
True
The correct way to do this in Python is to decode all text input to Unicode as soon as it enters your program. In particular, this means:
- Command-line arguments (via sys.argv)
- Environmental variables (via os.environ)
- The contents of text files (via open())
- Data retrieved from the web (via urllib.urlopen())
- The output of subprocesses (via subprocess.check_output() or subprocess.Popen etc.)
- Filepaths (via os.listdir() etc.). Sometimes. Basically, if you pass a Unicode string to a filesystem function, you’ll get Unicode back. If you pass an encoded string, you’ll get an encoded (UTF-8) string back.
Alfred-Workflow uses Unicode throughout, and any command-line arguments (Workflow.args), environmental variables (Workflow.alfred_env), or data from the web (e.g. web.Response.text) will be decoded to Unicode for you.
As a result of this, it’s important that you also decode any text your workflow pulls in from other sources. When you combine Unicode and encoded strings in Python 2, Python will “promote” the encoded string to Unicode by attempting to decode it as ASCII. In many cases this will work, but if the encoded string contains characters that aren’t in ASCII (e.g. £ or ü or —), your workflow will die in flames.
Tip
Always test your workflow with non-ASCII input to flush out any accidental mixing of Unicode and encoded strings.
Workflow provides the convenience method Workflow.decode() for working with Unicode and encoded strings. You can pass it Unicode or encoded strings and it will return normalized Unicode. You can specify the encoding and normalization form with the input_encoding and normalization arguments to Workflow or with the encoding and normalization arguments to Workflow.decode(). Generally, you shouldn’t need to change the default encoding of UTF-8, which is what OS X uses, but you may need to alter the normalization depending on where your workflow gets its data from.
Tip
To save yourself from having to prefix every string in your source code with u to mark it as a Unicode string, add from __future__ import unicode_literals at the top of your Python scripts. This makes all unprefixed strings Unicode by default (use b'' to create an encoded string). Add # encoding: utf-8 to the top of your source files to tell Python that the source code is UTF-8.
Encoded strings by default:
# encoding: utf-8

ustr = u'This is a Unicode string'
bstr = 'This is a UTF-8 encoded string'
Unicode by default:
# encoding: utf-8
from __future__ import unicode_literals

ustr = 'This is a Unicode string'
bstr = b'This is a UTF-8 encoded string'
Normalization¶
Unicode provides multiple ways to represent the same character. Normalization is the process of ensuring that all instances of a given Unicode character are represented in the same way.
TL;DR¶
Normalize all input.
Nitty-Gritty¶
If your workflow is based around comparing a user query to data from the system (filepaths, output of command-line programs), you should instantiate Workflow with the normalization='NFD' argument.
If your workflow uses data from the Web (via native Python libraries, including web), you probably don’t need to do anything (everything will be NFC-normalized).
If you’re mixing both kinds of data, the simplest solution is probably to run all data from the system through Workflow.decode() to ensure it is normalized in the same way as data from the Web.
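A minimal sketch of that approach, decoding system data before comparing it with a user query (the mdfind call is just an example):

import subprocess

from workflow import Workflow

wf = Workflow()  # NFC normalization by default
query = wf.args[0] if wf.args else u''

# `mdfind` returns UTF-8, NFD-normalized bytes; decode() converts them to
# Unicode using the Workflow's normalization form, so comparisons behave
output = wf.decode(subprocess.check_output([b'mdfind', b'kind:folder']))
paths = [line for line in output.split('\n') if line.strip()]

hits = wf.filter(query, paths) if query else paths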
Why does normalization matter?¶
In Unicode, accented characters can be represented in different ways, e.g. ü can be represented as ü or as u+¨. Unfortunately, Python doesn’t ensure that all Unicode strings are normalized to use the same representations when comparing them.
Therefore, if you’re comparing a string containing ü that came from a JSON file (which will typically be NFC-normalized) with an ostensibly identical string that came from OS X’s filesystem (which is NFD-normalized), Python won’t recognise them as being the same:
>>> from unicodedata import normalize
>>> from glob import glob
>>> name = u'München.txt'  # German for 'Munich'. NFC-normalized, as it's Python source code
>>> print(repr(name))
u'M\xfcnchen.txt'
>>> open(name, 'wb').write('')  # Create an empty text file called `München.txt`
>>> for filename in glob(u'*.txt'):
...     if filename == name:
...         print(u'Match    : {0} ({0!r}) == {1} ({1!r})'.format(filename, name))
...     else:
...         print(u'No match : {0} ({0!r}) != {1} ({1!r})'.format(filename, name))
...
# The filename has been NFD-normalized by the filesystem
No match : München.txt (u'Mu\u0308nchen.txt') != München.txt (u'M\xfcnchen.txt')
>>> for filename in glob(u'*.txt'):
...     filename = normalize('NFC', filename)  # Ensure the same normalization
...     if filename == name:
...         print(u'Match    : {0} ({0!r}) == {1} ({1!r})'.format(filename, name))
...     else:
...         print(u'No match : {0} ({0!r}) != {1} ({1!r})'.format(filename, name))
...
Match    : München.txt (u'M\xfcnchen.txt') == München.txt (u'M\xfcnchen.txt')
As a result of this Unicode quirk, it’s important to ensure that all input is normalized in the same way or, for example, a user-provided query (which may be NFC- or NFD-normalized) may not match JSON data pulled from an API (which is probably NFC-normalized) even though they are ostensibly the same.
Normalization with Alfred-Workflow¶
Note
This behaviour of Alfred-Workflow is not 100% correct. There are some strings (notably in Asian alphabets) that cannot be represented in all normalization forms, particularly NFC, which Alfred-Workflow uses by default. However, I decided to NFC-normalize all text you will get from Alfred-Workflow by default, as this will work as expected in 99+% of cases, and insulate Alfred-Workflow users from much of the pain of text encoding.
By default, Workflow and web return command line arguments from Alfred and text/decoded JSON data respectively as NFC-normalized Unicode strings.
This is the default for Python. You can change this via the normalization keyword to Workflow (this will, however, not affect web, which always returns NFC-normalized Unicode strings).
If your workflow works with data from the system (via subprocess, os.listdir() etc.), you should probably be NFC-normalizing those strings or changing the default normalization to NFD, which is (more or less) what OS X uses. Workflow.decode() can help with this.
Unfortunately, there is no bulletproof solution, as the query from Alfred can have different normalization forms.
If you pass a Unicode string to Workflow.decode(), it will be normalized using the form passed in the normalization argument to Workflow.decode() or to Workflow on instantiation.
If you pass an encoded string, it will be decoded to Unicode with the encoding passed in the encoding argument to Workflow.decode() or the input_encoding argument to Workflow on instantiation and then normalized as above.
Other Gotchas¶
Well, only one big gotcha. Namely, your shell probably has a sensible encoding (i.e. UTF-8) set via the LANG environmental variable (execute echo $LANG to check). Although this won’t affect Python 2’s auto-promotion of encoded strings (str objects) to Unicode (it always uses ASCII), it does affect the printing of Unicode strings, so using print() may work perfectly in your shell where the environmental encoding is UTF-8 but not in Alfred, where encoding is ASCII by default.
Be sure to print Unicode strings with print(my_unicode_string.encode('utf-8')) (e.g. when passing an argument to an Open URL Action or Post Notification Output)!
Further information¶
If you’re unfamiliar with using Unicode in Python, have a look at the official Python Unicode HOWTO.
API documentation¶
Documentation of the Alfred-Workflow APIs generated from the source code. A handy reference if (like me) you sometimes forget parameter names.
Alfred-Workflow API¶
This API documentation describes how Alfred-Workflow is put together.
See User Manual for documentation focussed on performing specific tasks.
The Workflow Object¶
The Workflow object is the main interface to this library.
See Workflow setup and skeleton in the User Manual for an example of how to set up your Python script to best utilise the Workflow object.
-
class
workflow.workflow.
Workflow
(default_settings=None, update_settings=None, input_encoding=u'utf-8', normalization=u'NFC', capture_args=True, libraries=None, help_url=None)¶ Create new
Workflow
instance.Parameters: - default_settings (
dict
) – default workflow settings. If no settings file exists,Workflow.settings
will be pre-populated withdefault_settings
. - update_settings (
dict
) – settings for updating your workflow from GitHub. This must be adict
that containsgithub_slug
andversion
keys.github_slug
is of the formusername/repo
andversion
must correspond to the tag of a release. See Self-Updating for more information. - input_encoding (
unicode
) – encoding of command line arguments - normalization (
unicode
) – normalisation to apply to CLI args. SeeWorkflow.decode()
for more details. - capture_args (
Boolean
) – capture and act onworkflow:*
arguments. See Magic arguments for details. - libraries (
tuple
orlist
) – sequence of paths to directories containing libraries. These paths will be prepended tosys.path
. - help_url (
unicode
orstr
) – URL to webpage where a user can ask for help with the workflow, report bugs, etc. This could be the GitHub repo or a page on AlfredForum.com. If your workflow throws an error, this URL will be displayed in the log and Alfred’s debugger. It can also be opened directly in a web browser with theworkflow:help
magic argument.
-
add_item
(title, subtitle=u'', modifier_subtitles=None, arg=None, autocomplete=None, valid=False, uid=None, icon=None, icontype=None, type=None, largetext=None, copytext=None)¶ Add an item to be output to Alfred
Parameters: - title (
unicode
) – Title shown in Alfred - subtitle (
unicode
) – Subtitle shown in Alfred - modifier_subtitles (
dict
) – Subtitles shown when modifier (CMD, OPT etc.) is pressed. Use adict
with the lowercase keyscmd
,ctrl
,shift
,alt
andfn
- arg (
unicode
) – Argument passed by Alfred as{query}
when item is actioned - autocomplete (
unicode
) – Text expanded in Alfred when item is TABbed - valid (
Boolean
) – Whether or not item can be actioned - uid (
unicode
) – Used by Alfred to remember/sort items - icon (
unicode
) – Filename of icon to use - icontype (
unicode
) – Type of icon. Must be one ofNone
,'filetype'
or'fileicon'
. Use'filetype'
whenicon
is a filetype such as'public.folder'
. Use'fileicon'
when you wish to use the icon of the file specified asicon
, e.g.icon='/Applications/Safari.app', icontype='fileicon'
. Leave as None ificon
points to an actual icon file. - type (
unicode
) – Result type. Currently only'file'
is supported (by Alfred). This will tell Alfred to enable file actions for this item. - largetext (
unicode
) – Text to be displayed in Alfred’s large text box if user presses CMD+L on item. - copytext (
unicode
) – Text to be copied to pasteboard if user presses CMD+C on item.
Returns: Item
instanceSee the Script Filter Results and the XML Format section of the documentation for a detailed description of what the various parameters do and how they interact with one another.
See System icons for a list of the supported system icons.
Note
Although this method returns an
Item
instance, you don’t need to hold onto it or worry about it. All generatedItem
instances are also collected internally and sent to Alfred whensend_feedback()
is called.The generated
Item
is only returned in case you want to edit it or do something with it other than send it to Alfred.- title (
-
alfred_env
¶ Alfred’s environmental variables minus the
alfred_
prefix.New in version 1.7.
The variables Alfred 2.4+ exports are:
Variable Description alfred_preferences Path to Alfred.alfredpreferences (where your workflows and settings are stored). alfred_preferences_localhash Machine-specific preferences are stored in Alfred.alfredpreferences/preferences/local/<hash>
(seealfred_preferences
above for the path toAlfred.alfredpreferences
)alfred_theme ID of selected theme alfred_theme_background Background colour of selected theme in format rgba(r,g,b,a)
alfred_theme_subtext Show result subtext. 0
= Always,1
= Alternative actions only,2
= Selected result only,3
= Neveralfred_version Alfred version number, e.g. '2.4'
alfred_version_build Alfred build number, e.g. 277
alfred_workflow_bundleid Bundle ID, e.g. net.deanishe.alfred-mailto
alfred_workflow_cache Path to workflow’s cache directory alfred_workflow_data Path to workflow’s data directory alfred_workflow_name Name of current workflow alfred_workflow_uid UID of workflow Note: all values are Unicode strings except
version_build
andtheme_subtext
, which are integers.Returns: dict
of Alfred’s environmental variables without thealfred_
prefix, e.g.preferences
,workflow_data
.
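A minimal sketch of reading a couple of these values (this assumes the script is actually being run from Alfred, so the variables are set):

from workflow import Workflow

wf = Workflow()
wf.logger.debug('Alfred version: %s', wf.alfred_env.get('version'))
wf.logger.debug('Cache dir: %s', wf.alfred_env.get('workflow_cache'))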
-
args
¶ Return command line args as normalised unicode.
Args are decoded and normalised via
decode()
.The encoding and normalisation are the
input_encoding
andnormalization
arguments passed toWorkflow
(UTF-8
andNFC
are the defaults).If
Workflow
is called withcapture_args=True
(the default),Workflow
will look for certainworkflow:*
args and, if found, perform the corresponding actions and exit the workflow.See Magic arguments for details.
-
bundleid
¶ Workflow bundle ID from environmental vars or
info.plist
.Returns: bundle ID Return type: unicode
-
cache_data
(name, data)¶ Save
data
to cache undername
.If
data
isNone
, the corresponding cache file will be deleted.Parameters: - name – name of datastore
- data – data to store. This may be any object supported by the cache serializer
-
cache_serializer
¶ Name of default cache serializer.
New in version 1.8.
This serializer is used by
cache_data()
andcached_data()
See
SerializerManager
for details.Returns: serializer name Return type: unicode
-
cached_data
(name, data_func=None, max_age=60)¶ Retrieve data from cache, or re-generate and re-cache data if stale/non-existent. If
max_age
is 0, return cached data no matter how old.Parameters: - name – name of datastore
- data_func (
callable
) – function to (re-)generate data. - max_age (
int
) – maximum age of cached data in seconds
Returns: cached data, return value of
data_func
orNone
ifdata_func
is not set
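A sketch of the typical pattern (the URL and datastore name are placeholders; get_recent_posts stands in for whatever slow call your workflow needs to make):

from workflow import Workflow, web

wf = Workflow()

def get_recent_posts():
    # Slow call to a webservice (placeholder URL)
    return web.get('https://example.com/posts.json').json()

# Return cached data if it is less than 10 minutes old; otherwise call
# get_recent_posts(), cache its return value and return that
posts = wf.cached_data('posts', get_recent_posts, max_age=600)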
-
cached_data_age
(name)¶ Return age of data cached at name in seconds or 0 if cache doesn’t exist
Parameters: name ( unicode
) – name of datastoreReturns: age of datastore in seconds Return type: int
-
cached_data_fresh
(name, max_age)¶ Is data cached at name less than max_age old?
Parameters: - name – name of datastore
- max_age (
int
) – maximum age of data in seconds
Returns: True
if data is less thanmax_age
old, elseFalse
-
cachedir
¶ Path to workflow’s cache directory.
The cache directory is a subdirectory of Alfred’s own cache directory in
~/Library/Caches
. The full path is:~/Library/Caches/com.runningwithcrayons.Alfred-2/Workflow Data/<bundle id>
Returns: full path to workflow’s cache directory Return type: unicode
-
cachefile
(filename)¶ Return full path to
filename
within your workflow’scache directory
.Parameters: filename ( unicode
) – basename of fileReturns: full path to file within cache directory Return type: unicode
-
check_update
(force=False)¶ Call update script if it’s time to check for a new release
New in version 1.9.
The update script will be run in the background, so it won’t interfere in the execution of your workflow.
See Self-updating in the User Manual for detailed information on how to enable your workflow to update itself.
Parameters: force ( Boolean
) – Force update check
-
clear_cache
(filter_func=<function <lambda>>)¶ Delete all files in workflow’s
cachedir
.Parameters: filter_func ( callable
) – Callable to determine whether a file should be deleted or not.filter_func
is called with the filename of each file in the data directory. If it returnsTrue
, the file will be deleted. By default, all files will be deleted.
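For example, to delete only the cache files belonging to one datastore (a sketch; the name is illustrative and wf is assumed to be a Workflow instance):

# Only delete cache files whose names start with 'posts'
wf.clear_cache(lambda filename: filename.startswith('posts'))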
-
clear_data
(filter_func=<function <lambda>>)¶ Delete all files in workflow’s
datadir
.Parameters: filter_func ( callable
) – Callable to determine whether a file should be deleted or not.filter_func
is called with the filename of each file in the data directory. If it returnsTrue
, the file will be deleted. By default, all files will be deleted.
-
clear_settings
()¶ Delete workflow’s
settings_path
.
-
data_serializer
¶ Name of default data serializer.
New in version 1.8.
This serializer is used by
store_data()
andstored_data()
See
SerializerManager
for details.Returns: serializer name Return type: unicode
-
datadir
¶ Path to workflow’s data directory.
The data directory is a subdirectory of Alfred’s own data directory in
~/Library/Application Support
. The full path is:~/Library/Application Support/Alfred 2/Workflow Data/<bundle id>
Returns: full path to workflow data directory Return type: unicode
-
datafile
(filename)¶ Return full path to
filename
within your workflow’sdata directory
.Parameters: filename ( unicode
) – basename of fileReturns: full path to file within data directory Return type: unicode
-
decode
(text, encoding=None, normalization=None)¶ Return
text
as normalised unicode.If
encoding
and/ornormalization
isNone
, the input_encoding and normalization
parameters passed toWorkflow
are used.Parameters: - text (encoded or Unicode string. If
text
is already a Unicode string, it will only be normalised.) – string - encoding (
unicode
orNone
) – The text encoding to use to decodetext
to Unicode. - normalization (
unicode
orNone
) – The normalisation form to apply to text
.
Returns: decoded and normalised
unicode
Workflow
uses “NFC” normalisation by default. This is the standard for Python and will work well with data from the web (viaweb
orjson
).OS X, on the other hand, uses “NFD” normalisation (nearly), so data coming from the system (e.g. via
subprocess
oros.listdir()
/os.path
) may not match. You should either normalise this data, too, or change the default normalisation used byWorkflow
.- text (encoded or Unicode string. If
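For example, to bring filenames returned by the system into line with the rest of your (NFC-normalised) data, you could pass them through decode() (a sketch, assuming wf is a Workflow instance):

import os

# os.listdir() returns (more or less) NFD-normalised names on OS X;
# decode() re-normalises them to NFC
filenames = [wf.decode(name) for name in os.listdir(wf.datadir)]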
-
delete_password
(account, service=None)¶ Delete the password stored at
service/account
. RaisesPasswordNotFound
if account is unknown.Parameters: - account (
unicode
) – name of the account the password is for, e.g. “Pinboard” - service (
unicode
) – Name of the service. By default, this is the workflow’s bundle ID
-
dumbify_punctuation
(text)¶ Convert non-ASCII punctuation to closest ASCII equivalent.
This method replaces “smart” quotes and n- or m-dashes with their workaday ASCII equivalents. This method is currently not used internally, but exists as a helper method for workflow authors.
Parameters: text ( unicode
) – text to convertReturns: text with only ASCII punctuation Return type: unicode
-
filter
(query, items, key=<function <lambda>>, ascending=False, include_score=False, min_score=0, max_results=0, match_on=127, fold_diacritics=True)¶ Fuzzy search filter. Returns list of
items
that matchquery
.query
is case-insensitive. Any item that does not contain the entirety ofquery
is rejected.Warning
If
query
is an empty string or contains only whitespace, aValueError
will be raised.Parameters: - query (
unicode
) – query to test items against - items (
list
ortuple
) – iterable of items to test - key (
callable
) – function to get comparison key fromitems
. Must return aunicode
string. The default simply returns the item. - ascending (
Boolean
) – set toTrue
to get worst matches first - include_score (
Boolean
) – Useful for debugging the scoring algorithm. IfTrue
, results will be a list of tuples(item, score, rule)
. - min_score (
int
) – If non-zero, ignore results with a score lower than this. - max_results (
int
) – If non-zero, prune results list to this length. - match_on (
int
) – Filter option flags. Bitwise-combined list ofMATCH_*
constants (see below). - fold_diacritics (
Boolean
) – Convert search keys to ASCII-only characters ifquery
only contains ASCII characters.
Returns: list of
items
matchingquery
or list of(item, score, rule)
tuples ifinclude_score
isTrue
.rule
is theMATCH_*
rule that matched the item.Return type: list
Matching rules
By default,
filter()
uses all of the following flags (i.e.MATCH_ALL
). The tests are always run in the given order:MATCH_STARTSWITH
: Item search key starts with query (case-insensitive).
MATCH_CAPITALS
: The list of capital letters in item search key starts with
query
(query
may be lower-case). E.g.,of
would matchOmniFocus
,gc
would matchGoogle Chrome
MATCH_ATOM
: Search key is split into “atoms” onnon-word characters (.,-,’ etc.). Matches if
query
is one of these atoms (case-insensitive).
MATCH_INITIALS_STARTSWITH
: Initials are the first characters of the above-described “atoms” (case-insensitive).
MATCH_INITIALS_CONTAIN
:query
is a substring of the above-described initials.
MATCH_INITIALS
: Combination of (4) and (5).MATCH_SUBSTRING
: Match ifquery
is a substring of item search key (case-insensitive).
MATCH_ALLCHARS
: Matches if all characters inquery
appear in item search key in the same order (case-insensitive).
MATCH_ALL
: Combination of all the above.
MATCH_ALLCHARS
is considerably slower than the other tests and provides much less accurate results.Examples:
To ignore
MATCH_ALLCHARS
(tends to provide the worst matches and is expensive to run), usematch_on=MATCH_ALL ^ MATCH_ALLCHARS
.To match only on capitals, use
match_on=MATCH_CAPITALS
.To match only on startswith and substring, use
match_on=MATCH_STARTSWITH | MATCH_SUBSTRING
.Diacritic folding
New in version 1.3.
If
fold_diacritics
isTrue
(the default), andquery
contains only ASCII characters, non-ASCII characters in search keys will be converted to ASCII equivalents (e.g. ü -> u, ß -> ss, é -> e).See
ASCII_REPLACEMENTS
for all replacements.If
query
contains non-ASCII characters, search keys will not be altered.
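A sketch of typical usage, filtering a list of dictionaries on one of their fields (wf is assumed to be a Workflow instance and query would normally come from wf.args; the data are illustrative):

books = [
    {'title': u'A Game of Thrones', 'author': u'George R. R. Martin'},
    {'title': u'The Name of the Wind', 'author': u'Patrick Rothfuss'},
]

# `key` returns the unicode string each item is matched against
results = wf.filter(query, books, key=lambda b: b['title'],
                    min_score=20, max_results=10)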
-
first_run
¶ Return
True
if it’s the first time this version has run.New in version 1.9.10.
Raises a
ValueError
ifversion
isn’t set.
-
fold_to_ascii
(text)¶ Convert non-ASCII characters to closest ASCII equivalent.
New in version 1.3.
Note
This only works for a subset of European languages.
Parameters: text ( unicode
) – text to convertReturns: text containing only ASCII characters Return type: unicode
-
get_password
(account, service=None)¶ Retrieve the password saved at
service/account
. RaisePasswordNotFound
exception if password doesn’t exist.Parameters: - account (
unicode
) – name of the account the password is for, e.g. “Pinboard” - service (
unicode
) – Name of the service. By default, this is the workflow’s bundle ID
Returns: account password
Return type: unicode
-
item_class
¶ alias of
Item
-
last_version_run
¶ Return the version of the workflow that last ran (or
None
)New in version 1.9.10.
Returns: Version
instance orNone
-
logfile
¶ Return path to logfile
Returns: path to logfile within workflow’s cache directory Return type: unicode
-
logger
¶ Create and return a logger that logs to both console and a log file.
Use
open_log()
to open the log file in Console.Returns: an initialised Logger
-
magic_arguments
= None¶ Mapping of available magic arguments. The built-in magic arguments are registered by default. To add your own magic arguments (or override built-ins), add a key:value pair where the key is what the user should enter (prefixed with
magic_prefix
) and the value is a callable that will be called when the argument is entered. If you would like to display a message in Alfred, the function should return aunicode
string.By default, the magic arguments documented here are registered.
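A sketch of registering a custom magic argument (the argument name and behaviour are illustrative):

wf = Workflow()

def delete_cached_posts():
    wf.clear_cache(lambda filename: filename.startswith('posts'))
    return u'Deleted cached posts'  # shown in Alfred

# The user can now enter "workflow:delposts" as the query
wf.magic_arguments['delposts'] = delete_cached_posts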
-
magic_prefix
= None¶ The prefix for all magic arguments. Default is
workflow:
-
name
¶ Workflow name from Alfred’s environmental vars or
info.plist
.Returns: workflow name Return type: unicode
-
open_help
()¶ Open
help_url
in default browser
-
open_terminal
()¶ Open a Terminal window at workflow’s
workflowdir
.
-
open_workflowdir
()¶ Open the workflow’s
workflowdir
in Finder.
-
run
(func)¶ Call
func
to run your workflow.

Parameters: func – Callable to call with self (i.e. the Workflow instance) as its first argument. func should be the main entry point to your workflow.

Any exceptions raised will be logged and an error message will be output to Alfred.
-
save_password
(account, password, service=None)¶ Save account credentials.
If the account exists, the old password will first be deleted (Keychain throws an error otherwise).
If something goes wrong, a
KeychainError
exception will be raised.Parameters: - account (
unicode
) – name of the account the password is for, e.g. “Pinboard” - password (
unicode
) – the password to secure - service (
unicode
) – Name of the service. By default, this is the workflow’s bundle ID
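A sketch of saving and later retrieving an API key (the account name and key are placeholders):

from workflow import Workflow
from workflow.workflow import PasswordNotFound

wf = Workflow()

# Store the key (e.g. after the user has entered it)
wf.save_password('pinboard_api_key', u'ab1234cd5678')

# Retrieve it later; raises PasswordNotFound if it was never saved
try:
    api_key = wf.get_password('pinboard_api_key')
except PasswordNotFound:
    api_key = None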
-
send_feedback
()¶ Print stored items to console/Alfred as XML.
-
set_last_version
(version=None)¶ Set
last_version_run
to current versionNew in version 1.9.10.
Parameters: version ( Version
instance orunicode
) – version to store (default is current version)Returns: True
if version is saved, elseFalse
-
settings
¶ Return a dictionary subclass that saves itself when changed.
See Settings in the User Manual for more information on how to use
settings
and important limitations on what it can do.Returns: Settings
instance initialised from the data in JSON file atsettings_path
or if that doesn’t exist, with thedefault_settings
dict
passed toWorkflow
on instantiation.Return type: Settings
instance
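A sketch of typical usage (the keys are illustrative):

wf = Workflow(default_settings={'count': 20})

count = wf.settings.get('count', 20)

# Assigning a value saves settings.json immediately. Only operations on
# the Settings object itself trigger a save; see the limitations linked above.
wf.settings['username'] = u'deanishe'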
-
settings_path
¶ Path to settings file within workflow’s data directory.
Returns: path to settings.json
fileReturn type: unicode
-
start_update
()¶ Check for update and download and install new workflow file
New in version 1.9.
See Self-updating in the User Manual for detailed information on how to enable your workflow to update itself.
Returns: True
if an update is available and will be installed, elseFalse
-
store_data
(name, data, serializer=None)¶ Save data to data directory.
New in version 1.8.
If
data
isNone
, the datastore will be deleted. Note that the datastore does NOT support multiple threads.
Parameters: - name – name of datastore
- data – object(s) to store. Note: some serializers can only handle certain types of data.
- serializer – name of serializer to use. If no serializer
is specified, the default will be used. See
SerializerManager
for more information.
Returns: data in datastore or
None
-
stored_data
(name)¶ Retrieve data from data directory. Returns
None
if there are no data stored.New in version 1.8.
Parameters: name – name of datastore
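A sketch of saving data in a user-readable format by naming a different serializer (see the Serialization section below; the data are illustrative and wf is assumed to be a Workflow instance):

# Save as JSON instead of the default cpickle
wf.store_data('bookmarks', [{'url': u'http://www.example.com'}],
              serializer='json')

bookmarks = wf.stored_data('bookmarks')  # None if nothing is stored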
-
update_available
¶ Is an update available?
New in version 1.9.
See Self-updating in the User Manual for detailed information on how to enable your workflow to update itself.
Returns: True
if an update is available, elseFalse
-
version
¶ Return the version of the workflow
New in version 1.9.10.
Get the version from the
update_settings
dict passed on instantiation or theversion
file located in the workflow’s root directory. ReturnNone
if neither exist orValueError
if the version number is invalid (i.e. not semantic).Returns: Version of the workflow (not Alfred-Workflow) Return type: Version
object
-
workflowdir
¶ Path to workflow’s root directory (where
info.plist
is).Returns: full path to workflow root directory Return type: unicode
-
workflowfile
(filename)¶ Return full path to
filename
in workflow’s root dir (whereinfo.plist
is).Parameters: filename ( unicode
) – basename of fileReturns: full path to file within data directory Return type: unicode
-
workflow.workflow.
atomic_writer
(*args, **kwds)¶ Atomic file writer.
Parameters: - file_path (
unicode
) – path of file to write to. - mode (string) – same as for open()
New in version 1.12.
Context manager that ensures the file is only written if the write succeeds. The data is first written to a temporary file.
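A minimal sketch (the filename and data are placeholders, and wf is assumed to be a Workflow instance):

from workflow.workflow import atomic_writer

# The file only appears at its final path if the write completes
with atomic_writer(wf.datafile('export.json'), 'wb') as file_obj:
    file_obj.write('{"status": "ok"}')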
-
class
workflow.workflow.
uninterruptible
(func, class_name=u'')¶ Decorator that postpones SIGTERM until wrapped function is complete.
New in version 1.12.
Since version 2.7, Alfred allows Script Filters to be killed. If your workflow is killed in the middle of critical code (e.g. writing data to disk), this may corrupt your workflow’s data.
Use this decorator to wrap critical functions that must complete. If the script is killed while a wrapped function is executing, the SIGTERM will be caught and handled after your function has finished executing.
Alfred-Workflow uses this internally to ensure its settings, data and cache writes complete.
Important
This decorator is NOT thread-safe.
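A sketch of wrapping a critical write (the function body is illustrative):

from workflow.workflow import uninterruptible

@uninterruptible
def write_data(path, data):
    # If Alfred sends SIGTERM while this function is running, the
    # signal is handled only after the function has returned
    with open(path, 'wb') as file_obj:
        file_obj.write(data)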
-
class
workflow.workflow.
KeychainError
¶ Raised by methods
Workflow.save_password()
,Workflow.get_password()
andWorkflow.delete_password()
whensecurity
CLI app returns an unknown error code.
-
class
workflow.workflow.
PasswordNotFound
¶ Raised by method
Workflow.get_password()
whenaccount
is unknown to the Keychain.
-
class
workflow.workflow.
PasswordExists
¶ Raised when trying to overwrite an existing account password.
You should never receive this error: it is used internally by the
Workflow.save_password()
method to know if it needs to delete the old password first (a Keychain implementation detail).
Fetching Data from the Web¶
workflow.web
provides a simple API for retrieving data from the Web
modelled on the excellent requests library.
The purpose of workflow.web
is to cover trivial cases at just 0.5% of
the size of requests.
Features¶
- JSON requests and responses
- Form data submission
- File uploads
- Redirection support
The main API consists of the get()
and post()
functions and
the Response
instances they return.
Warning
As workflow.web
is based on Python 2’s standard HTTP libraries, it
does not verify SSL certificates when establishing HTTPS
connections.
As a result, you must not use this module for sensitive connections.
If you require certificate verification for HTTPS connections (which you
really should), you should use the excellent requests library
(upon which the workflow.web
API is based) or the command-line tool
cURL, which is installed by default on OS X, instead.
Examples¶
There are some examples of using workflow.web
in other parts of the
documentation:
API¶
get()
and post()
are wrappers around request()
. They all
return Response
objects.
-
workflow.web.
get
(url, params=None, headers=None, cookies=None, auth=None, timeout=60, allow_redirects=True)¶ Initiate a GET request. Arguments as for
request()
.Returns: Response
instance
-
workflow.web.
post
(url, params=None, data=None, headers=None, cookies=None, files=None, auth=None, timeout=60, allow_redirects=False)¶ Initiate a POST request. Arguments as for
request()
.Returns: Response
instance
-
workflow.web.
request
(method, url, params=None, data=None, headers=None, cookies=None, files=None, auth=None, timeout=60, allow_redirects=False)¶ Initiate an HTTP(S) request. Returns
Response
object.Parameters: - method (
unicode
) – ‘GET’ or ‘POST’ - url (
unicode
) – URL to open - params (
dict
) – mapping of URL parameters - data (
dict
orstr
) – mapping of form data{'field_name': 'value'}
orstr
- headers (
dict
) – HTTP headers - cookies (
dict
) – cookies to send to server - files (
dict
) – files to upload (see below). - auth (
tuple
) – username, password - timeout (
int
) – connection timeout limit in seconds - allow_redirects (
Boolean
) – follow redirections
Returns: Response
objectThe
files
argument is a dictionary:{'fieldname' : { 'filename': 'blah.txt', 'content': '<binary data>', 'mimetype': 'text/plain'} }
fieldname
is the name of the field in the HTML form.mimetype
is optional. If not provided,mimetypes
will be used to guess the mimetype, orapplication/octet-stream
will be used.
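A sketch of a GET request with URL parameters and a POST uploading a file (the URLs and filenames are placeholders):

from workflow import web

# GET with URL parameters
r = web.get('https://api.example.com/search',
            params={'q': 'alfred', 'format': 'json'})
r.raise_for_status()
results = r.json()

# POST with a file upload
files = {'attachment': {'filename': 'report.txt',
                        'content': open('report.txt', 'rb').read(),
                        'mimetype': 'text/plain'}}
r = web.post('https://api.example.com/upload', files=files)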
The Response object¶
-
class
workflow.web.
Response
(request)¶ Returned by
request()
/get()
/post()
functions.A simplified version of the
Response
object in therequests
library.>>> r = request('http://www.google.com') >>> r.status_code 200 >>> r.encoding ISO-8859-1 >>> r.content # bytes <html> ... >>> r.text # unicode, decoded according to charset in HTTP header/meta tag u'<html> ...' >>> r.json() # content parsed as JSON
-
iter_content
(chunk_size=4096, decode_unicode=False)¶ Iterate over response data.
New in version 1.6.
Parameters: - chunk_size (
int
) – Number of bytes to read into memory - decode_unicode (
Boolean
) – Decode to Unicode using detected encoding
Returns: iterator
-
raise_for_status
()¶ Raise stored error if one occurred.
error will be instance of
urllib2.HTTPError
-
save_to_path
(filepath)¶ Save retrieved data to file at
filepath
Parameters: filepath – Path to save retrieved data.
-
Background Tasks¶
New in version 1.4.
Run scripts in the background.
This module allows your workflow to execute longer-running processes, e.g. updating the data cache from a webservice, in the background, allowing the workflow to remain responsive in Alfred.
For example, if your workflow requires up-to-date exchange rates, you might
write a script update_exchange_rates.py
to retrieve the data from the
relevant webservice, and call it from your main workflow script:
from workflow import Workflow, ICON_INFO
from workflow.background import run_in_background, is_running

def main(wf):
    # Is cache over 1 hour old or non-existent?
    if not wf.cached_data_fresh('exchange-rates', 3600):
        run_in_background('update',
                          ['/usr/bin/python',
                           wf.workflowfile('update_exchange_rates.py')])

    # Add a notification if the script is running
    if is_running('update'):
        wf.add_item('Updating exchange rates...', icon=ICON_INFO)

    # max_age=0 will return the cached data regardless of age
    exchange_rates = wf.cached_data('exchange-rates', max_age=0)

    # Display (possibly stale) cached data
    if exchange_rates:
        for rate in exchange_rates:
            wf.add_item(rate)

    # Send results to Alfred
    wf.send_feedback()

if __name__ == '__main__':
    wf = Workflow()
    wf.run(main)
For a working example, see Part 2: A Distribution-Ready Pinboard Workflow.
API¶
-
workflow.background.
run_in_background
(name, args, **kwargs)¶ Pickle arguments to cache file, then call this script again via
subprocess.call()
.Parameters: - name (
unicode
) – name of task - args – arguments passed as first argument to
subprocess.call()
- **kwargs – keyword arguments to
subprocess.call()
Returns: exit code of sub-process
Return type: int
When you call this function, it caches its arguments and then calls
background.py
in a subprocess. The Python subprocess will load the cached arguments, fork into the background, and then run the command you specified.This function will return as soon as the
background.py
subprocess has forked, returning the exit code of that process (i.e. not of the command you’re trying to run).If that process fails, an error will be written to the log file.
If a process is already running under the same name, this function will return immediately and will not run the specified command.
- name (
-
workflow.background.
is_running
(name)¶ Test whether task is running under
name
Parameters: name ( unicode
) – name of taskReturns: True
if task with namename
is running, elseFalse
Return type: Boolean
Self-Updating¶
New in version 1.9.
Add self-updating capabilities to your workflow. It regularly (every day by default) fetches the latest releases from the specified GitHub repository.
Currently, only updates from GitHub releases are supported.
Note
Alfred-Workflow will check for updates, but will neither install them nor notify the user that an update is available.
Please see Self-updating in the User Manual for information on how to enable automatic updates in your workflow.
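As a sketch, enabling updates means passing an update_settings dict when creating your Workflow. The github_slug key shown here mirrors the username/repo form used by workflow.update.check_update(); the exact keys, and the workflow version that must also be set (e.g. via the version file), are described in the User Manual:

from workflow import Workflow

wf = Workflow(update_settings={'github_slug': 'username/myworkflow'})

if wf.update_available:
    # Download and install the newer release
    wf.start_update()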
API¶
Self-updating from GitHub
New in version 1.9.
Note
This module is not intended to be used directly. Automatic updates
are controlled by the update_settings
dict
passed to
Workflow
objects.
-
class
workflow.update.
Version
(vstr)¶ Bases:
object
Mostly semantic versioning
The main difference to proper semantic versioning is that this implementation doesn’t require a minor or patch version.
-
match_version
()¶ Match version and pre-release/build information in version strings
-
tuple
¶ Return version number as a tuple of major, minor, patch, pre-release
-
-
workflow.update.
build_api_url
(slug)¶ Generate releases URL from GitHub slug
Parameters: slug – Repo name in form username/repo
Returns: URL to the API endpoint for the repo’s releases
-
workflow.update.
check_update
(github_slug, current_version)¶ Check whether a newer release is available on GitHub
Parameters: - github_slug –
username/repo
for workflow’s GitHub repo - current_version (
unicode
) – the currently installed version of the workflow. Semantic versioning is required.
Returns: True
if an update is available, elseFalse
If an update is available, its version number and download URL will be cached.
-
workflow.update.
download_workflow
(url)¶ Download workflow at
url
to a local temporary fileParameters: url – URL to .alfredworkflow file in GitHub repo Returns: path to downloaded file
-
workflow.update.
get_valid_releases
(github_slug)¶ Return list of all valid releases
Parameters: github_slug – username/repo
for workflow’s GitHub repoReturns: list of dicts. Each dict
has the form{'version': '1.1', 'download_url': 'http://github.com/...'}
A valid release is one that contains one
.alfredworkflow
file.If the GitHub version (i.e. tag) is of the form
v1.1
, the leadingv
will be stripped.
-
workflow.update.
install_update
(github_slug, current_version)¶ If a newer release is available, download and install it
Parameters: - github_slug –
username/repo
for workflow’s GitHub repo - current_version (
unicode
) – the currently installed version of the workflow. Semantic versioning is required.
If an update is available, it will be downloaded and installed.
Returns: True
if an update is installed, elseFalse
-
workflow.update.
wf
()¶
Serialization¶
Workflow
has
several methods for storing persistent data
to your workflow’s data and cache directories. By default these are stored as
Python pickle
objects using CPickleSerializer
(with
the file extension .cpickle
).
You may, however, want to serialize your data in a different format, e.g. JSON,
to make it user-readable/-editable or to interface with other software, and
the SerializerManager
and data storage/caching APIs enable
you to do this.
For more information on how to change the default serializers, specify alternative ones and register new ones, see Persistent data and Serialization of stored/cached data in the User Manual.
API¶
-
class
workflow.workflow.
SerializerManager
¶ Contains registered serializers.
New in version 1.8.
A configured instance of this class is available at
workflow.manager
.Use
register()
to register new (or replace existing) serializers, which you can specify by name when callingWorkflow
data storage methods.See Serialization of stored/cached data and Persistent data for further information.
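A sketch of registering a custom serializer; YAMLSerializer and the PyYAML import are illustrative, and any object with load() and dump() methods will do:

import yaml  # hypothetical third-party library bundled with the workflow

from workflow import Workflow, manager

class YAMLSerializer(object):
    @classmethod
    def load(cls, file_obj):
        return yaml.load(file_obj)

    @classmethod
    def dump(cls, obj, file_obj):
        return yaml.dump(obj, file_obj)

# Files saved with this serializer get a `.yaml` extension
manager.register('yaml', YAMLSerializer)

wf = Workflow()
wf.store_data('config', {'key': u'value'}, serializer='yaml')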
-
register
(name, serializer)¶ Register
serializer
object undername
.Raises
AttributeError
ifserializer
is invalid.
Note
name
will be used as the file extension of the saved files.Parameters: - name (
unicode
orstr
) – Name to registerserializer
under - serializer – object with
load()
anddump()
methods
- name (
-
serializer
(name)¶ Return serializer object for
name
orNone
if no such serializer is registeredParameters: name ( unicode
orstr
) – Name of serializer to returnReturns: serializer object or None
-
serializers
¶ Return names of registered serializers
-
unregister
(name)¶ Remove registered serializer with
name
Raises a
ValueError
if there is no such registered serializer.Parameters: name ( unicode
orstr
) – Name of serializer to removeReturns: serializer object
-
-
class
workflow.workflow.
JSONSerializer
¶ Wrapper around
json
. Setsindent
andencoding
.New in version 1.8.
Use this serializer if you need readable data files. JSON doesn’t support Python objects as well as
cPickle
/pickle
, so be careful which data you try to serialize as JSON.-
classmethod
dump
(obj, file_obj)¶ Serialize object
obj
to open JSON file.New in version 1.8.
Parameters: - obj (JSON-serializable data structure) – Python object to serialize
- file_obj (
file
object) – file handle
-
classmethod
-
class
workflow.workflow.
CPickleSerializer
¶ Wrapper around
cPickle
. Setsprotocol
.New in version 1.8.
This is the default serializer and the best combination of speed and flexibility.
-
classmethod
dump
(obj, file_obj)¶ Serialize object
obj
to open pickle file.New in version 1.8.
Parameters: - obj (Python object) – Python object to serialize
- file_obj (
file
object) – file handle
-
classmethod
-
class
workflow.workflow.
PickleSerializer
¶ Wrapper around
pickle
. Setsprotocol
.New in version 1.8.
Use this serializer if you need to add custom pickling.
-
classmethod
dump
(obj, file_obj)¶ Serialize object
obj
to open pickle file.New in version 1.8.
Parameters: - obj (Python object) – Python object to serialize
- file_obj (
file
object) – file handle
-
classmethod
Script Filter results and the XML format¶
An in-depth look at Alfred’s XML format, the many parameters accepted by
Workflow.add_item()
and how they interact with one another.
Note
This should also serve as a decent reference to Alfred’s XML format for folks who aren’t using Alfred-Workflow. The official Alfred 2 XML docs have recently seen a massive update, but historically haven’t been very up-to-date.
Script Filter Results and the XML Format¶
Note
This document is valid as of version 2.5 of Alfred and 1.8.5 of Alfred-Workflow.
Alfred’s Script Filters are its most powerful workflow API and a main focus
of Alfred-Workflow. Script Filters work by receiving a {query}
from
Alfred and returning a list of results as XML.
To build this list of results use the
Workflow.add_item()
method, and then
Workflow.send_feedback()
to send the results back to Alfred.
This document is an attempt to explain how the many options available in the
XML format and Workflow.add_item()
‘s
arguments work.
Danger
As Script Filters use STDOUT
to send their results to Alfred
as XML, you must not print()
or log any output to STDOUT
or it
will break the XML, and Alfred will show no results.
XML format / available parameters¶
Warning
If you’re not using Alfred-Workflow to generate your Script Filter’s output, you should use a real XML library to do so. XML is a lot more finicky than it looks, and it’s fairly easy to create invalid XML. Unless your XML is hard-coded (i.e. never changes), it’s much safer and more reliable to use a proper XML library than to generate your own XML.
This is a valid and complete XML result list containing just one result with
all possible options.
Workflow.send_feedback()
will print something much like this to STDOUT
when called (though it won’t
be as pretty, as it will all be on one line).
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 | <?xml version="1.0" encoding="UTF-8"?>
<items>
<item uid="home" valid="YES" autocomplete="Home Folder" type="file">
<title>Home Folder</title>
<subtitle>Home folder ~/</subtitle>
<subtitle mod="shift">Subtext when shift is pressed</subtitle>
<subtitle mod="fn">Subtext when fn is pressed</subtitle>
<subtitle mod="ctrl">Subtext when ctrl is pressed</subtitle>
<subtitle mod="alt">Subtext when alt is pressed</subtitle>
<subtitle mod="cmd">Subtext when cmd is pressed</subtitle>
<text type="copy">Text when copying</text>
<text type="largetype">Text for LargeType</text>
<icon type="fileicon">~/</icon>
<arg>~/</arg>
</item>
</items>
|
The first line is the standard XML declaration. If you’re generating your own XML, you should probably use a declaration exactly as shown here and ensure your XML is encoded as UTF-8 text. If you’re using Alfred-Workflow, the XML declaration will be generated for you and it will ensure that the XML output is UTF-8-encoded.
The root element must be <items>
(lines 2 and 16).
The <items>
element contains one or more <item>
elements.
To generate the above XML with Alfred-Workflow you would use:
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 | from workflow import Workflow
wf = Workflow()
wf.add_item(u'Home Folder', # title
u'Home folder ~/', # subtitle
modifier_subtitles={
u'shift': u'Subtext when shift is pressed',
u'fn': u'Subtext when fn is pressed',
u'ctrl': u'Subtext when ctrl is pressed',
u'alt': u'Subtext when alt is pressed',
u'cmd': u'Subtext when cmd is pressed'
},
arg=u'~/',
autocomplete=u'Home Folder',
valid=True,
uid=u'home',
icon=u'~/',
icontype=u'fileicon',
type=u'file',
largetext=u'Text for LargeType',
copytext=u'Text when copying')
# Print XML to STDOUT
wf.send_feedback()
|
Basic example¶
A minimal, valid result looks like this:
<item>
<title>My super title</title>
</item>
Generated with:
wf.add_item(u'My super title')
This will show a result in Alfred with Alfred’s blank workflow icon and “My super title” as its text.
Everything else is optional, but some parameters don’t make much sense without other complementary parameters. Let’s have a look.
Item parameters¶
title¶
This is the large text shown for each result in Alfred’s results list.
Pass to Workflow.add_item()
as
the title
argument or the first unnamed argument. This is the only
required argument and must be unicode
:
wf.add_item(u'My title'[, ...])
or
wf.add_item(title=u'My title'[, ...])
subtitle¶
This is the smaller text shown under each result in Alfred’s results list.
Important
Remember that users can turn off subtitles in Alfred’s settings. If you
don’t want to confuse minimalists, don’t relegate essential information to
the subtitle
. On the other hand, you could argue that users who think
turning off subtitles is okay deserve what they get…
Pass to Workflow.add_item()
as
the subtitle
argument or the second unnamed argument (the first, title
,
is required and must therefore be present).
It’s also possible to specify custom subtitles to be shown when a result is
selected and the user presses one of the modifier keys (cmd
, opt
,
ctrl
, shift
, fn
).
These are specified in the XML file as additional <subtitle>
elements with
mod="<key>"
attributes (see lines 6–10 in the
example XML).
In Alfred-Workflow, you can set modifier-specific subtitles with the
modifier_subtitles
argument to
Workflow.add_item()
, which must
be a dictionary with some or all of the keys alt
, cmd
, ctrl
,
fn
, shift
and the corresponding values set to the unicode
subtitles to be shown when the modifiers are pressed (see lines 7–13 of the
example code).
autocomplete¶
If the user presses TAB
on a result, the query currently shown in Alfred’s
query box will be expanded to the autocomplete
value of the selected result.
If the user presses ENTER
on a result with valid set to no
,
Alfred will expand the query as if the user had pressed TAB
.
Pass to Workflow.add_item()
as
the autocomplete
argument. Must be unicode
.
When a user autocompletes a result with TAB
, Alfred will run the Script
Filter again with the new query.
If no autocomplete
parameter is specified, using TAB
on a result will
have no effect.
arg¶
Pass to Workflow.add_item()
as
the arg
argument. Must be unicode
.
This is the “value” of the result that will be passed by Alfred as {query}
to the Action(s) or Output(s) your Script Filter is connected to when the
result is “actioned” (i.e. by selecting it and hitting ENTER
or using
CMD+NUM
).
Additionally, if you press CMD+C on a result in Alfred, arg
will be copied to
the pasteboard (unless you have set copy text for the
item).
Other than being copyable, setting arg
doesn’t make a great deal of sense unless
the item is also valid. An exception is if the item’s
type is file
. In this case, a user can still use File Actions
on an item, even if it is not valid.
Note
arg
may also be specified as an attribute of the <item>
element, but specifying it as a child element of <item>
is more flexible:
you can include newlines within an element, but not within an attribute.
valid¶
Passed to Workflow.add_item()
as
the valid
argument. Must be True
or False
(the default).
In the XML file, valid
is an attribute on the <item>
element and must
have the value of either YES
or NO
:
1 2 3 4 5 6 | <item valid="YES">
...
</item>
<item valid="NO">
...
</item>
|
valid
determines whether a user can action a result (i.e with ENTER
or CMD+NUM
) in Alfred’s results list or not ("YES"
/True
meaning they can). If a result has the type file
, users
can still perform File Actions on it (if arg is set to a valid
filepath).
Specifying valid=True
/valid="YES"
has no effect if arg
isn’t set.
uid¶
Pass to Workflow.add_item()
as
the uid
argument. Must be unicode
.
Alfred uses the uid
to uniquely identify a result and apply its “knowledge”
to it. That is to say, if (and only if) a user hits ENTER
on a result with
a uid
, Alfred will associate that result (well, its uid
) with its
current query and prioritise that result for the same query in the future.
As a result, in most situations you should ensure that a particular item always
has the same uid
. In practice, setting uid
to the same value as arg
is often a good choice.
If you omit the uid
, Alfred will show results in the order in which they
appear in the XML file (the order in which you add them with
Workflow.add_item()
).
type¶
The type of the result. Currently, only file
and file:skipcheck
are
supported.
Pass to Workflow.add_item()
as
the type
argument. Should be unicode
. Currently, the only allowed
value is file
.
If the type
of a result is set to file
(the only value currently
supported by Alfred), it will enable users to “action” the item, as in Alfred’s
file browser, and show Alfred’s File Actions (Open
, Open with…
,
Reveal in Finder
etc.) using the default keyboard shortcut set in
Alfred Preferences > File Search > Actions > Show Actions
.
If type
is set to file:skipcheck
, Alfred won’t test to see if the file
specified as arg actually exists. This will save a tiny bit of
time if you’re sure the file exists.
For File Actions to work, arg must be set to a valid filepath, but it is not necessary for the item to be valid.
copy text¶
Text that will be copied to the pasteboard if a user presses CMD+C
on a
result.
Pass to Workflow.add_item()
as
the copytext
argument. Must be unicode
.
Set using <text type="copy">Copy text goes here</text>
in XML.
If copytext
is set, when the user presses CMD+C
, this will be copied to
the pasteboard and Alfred’s window will close. If copytext
is not set, the
selected result’s arg value will be copied to the pasteboard
and Alfred’s window will close. If neither is set, nothing will be copied to
the pasteboard and Alfred’s window will close.
large text¶
Text that will be displayed in Alfred’s Large Type pop-up if a user presses
CMD+L
on a result.
Pass to Workflow.add_item()
as
the largetext
argument. Must be unicode
.
Set using <text type="largetype">Large text goes here</text>
in XML.
If largetext
is not set, when the user presses CMD+L
on a result, Alfred
will display the current query in its Large Type pop-up.
icon¶
There are three different kinds of icon you can tell Alfred to use. Use the
type
attribute of the <icon>
XML element or the icontype
argument
to Alfred.add_item()
to define which type of icon you want.
Image files¶
This is the default. Simply pass the filename or filepath of an image file:
<icon>icon.png</icon>
or:
Workflow.add_item(..., icon=u'icon.png')
Relative paths will be interpreted by Alfred as relative to the root of your
workflow directory, so icon.png
will be your workflow’s own icon,
icons/github.png
is the file github.png
in the icons
subdirectory
of your workflow etc.
You can pass paths to PNG
or ICNS
files. If you’re using PNG
, you
should try to make them square and ideally 256 px wide/high. Anything bigger
and Alfred will have to resize the icon; smaller and it won’t look so good on a
Retina screen.
File icons¶
Alternatively, you can tell Alfred to use the icon of a file:
<icon type="fileicon">/path/to/some/file.pdf</icon>
or:
Workflow.add_item(..., icon=u'/path/to/some/file.pdf',
icontype=u'fileicon')
This is great if your workflow lists the user’s own files: by passing the file’s path as the icon, Alfred will show the appropriate icon for that file, so your Script Filter behaves like Alfred’s File Browser or File Filters.
If you have set a custom icon for, e.g., your Downloads folder, this custom icon will be shown. In the case of media files that have cover art, e.g. audio files, movies, ebooks, comics etc., any cover art will not be shown, but rather the standard icon for the appropriate filetype.
Filetype icons¶
Finally, you can tell Alfred to use the icon for a specific filetype by
specifying a UTI as the value
to icon
and filetype
as the type:
<icon type="filetype">public.html</icon>
or:
Workflow.add_item(..., icon=u'public.html', icontype=u'filetype')
This will show the icon for HTML
pages, which will be different depending
on which browser you have set as the default.
filetype
icons are useful if your Script Filter deals with files and
filetypes but you don’t have a specific filepath to use as a fileicon
.
Tip
If you need to find the UTI for a filetype, Alfred can help you.
Add a File Filter to a workflow, and drag a file of the type you’re
interested in into the File Types list in the Basic Setup tab. Alfred will
show the corresponding UTI in the list (in this screenshot, I dragged a
.py
file into the list):

You can also find the UTI of a file (along with much of its other metadata)
by running mdls /path/to/the/file
in Terminal.
Workflows using Alfred-Workflow¶
This is a list of some of the workflows based on Alfred-Workflow.
Workflows using Alfred-Workflow¶
Here are some workflows that are made with Alfred-Workflow. Have a poke around in their repos for inspiration.
Adding your own workflow to the list¶
If you’d like your own workflow added to the list, please see the corresponding section in the GitHub README.
- Alfred Backblaze (GitHub repo) by XedMada (on GitHub). Pause and Start Backblaze online backups.
- Alfred Dependency Bundler Demo (Python) (GitHub repo) by deanishe (on GitHub). Demonstration on how to use the Alfred Bundler in Python.
- Alfred Soundboard by Steffen. A soundboard for alfred at your fingertips.
- AppScripts (GitHub repo) by deanishe (on GitHub). List, search and run/open AppleScripts for the active application.
- Base Converter (GitHub repo) by ahalbert (on GitHub). Convert arbitrary bases(up to base 32) in Alfred 2 and copy them to the clipboard.
- BeautifulRatio (GitHub repo) by yusuga (on GitHub). This workflow calculates the Golden ratio and Silver ratio.
- Better IMDB search by frankspin. Search IMDB for movies and see results inside of Alfred.
- BibQuery (GitHub repo) by hackademic (on GitHub). Search BibDesk from the comfort of your keyboard.
- Blur by Tyler Eich. Set Alfred’s background blur radius.
- Calendar (GitHub repo) by owenwater (on GitHub). Displays a monthly calendar with Alfred Workflow.
- Code Case by dfay. Case Converter for Code.
- Codebox (GitHub repo) by danielecook (on GitHub). Search codebox snippets.
- Continuity Support by dmarshall. Enables calling and messaging via contacts or number input.
- Convert (GitHub repo) by deanishe (on GitHub). Convert between different units. No Internet connection required.
- Date Calculator (GitHub repo) by MuppetGate (on GitHub). A basic date calculator.
- Digital Ocean status (GitHub repo) by frankspin (on GitHub). Control your Digital Ocean droplets.
- Display Brightness (GitHub repo) by fniephaus (on GitHub). Adjust your display’s brightness with Alfred.
- Dropbox Client for Alfred (GitHub repo) by fniephaus (on GitHub). Access multiple Dropbox accounts with Alfred.
- Duden Search (GitHub repo) by deanishe (on GitHub). Search duden.de German dictionary (with auto-suggest).
- Fabric for Alfred by fniephaus. Quickly execute Fabric tasks.
- Fakeum (GitHub repo) by deanishe (on GitHub). Generate fake test data in Alfred.
- Forvo (GitHub repo) by owenwater (on GitHub). A pronunciation workflow based on Forvo.com.
- Fuzzy Folders (GitHub repo) by deanishe (on GitHub). Fuzzy search across folder subtrees.
- Genymotion (GitHub repo) by yakiyama (on GitHub). Start emulator instantly.
- Git Repos (GitHub repo) by deanishe (on GitHub). Browse, search and open Git repositories from within Alfred.
- Glosbe Translation by deanishe. Translate text using Glosbe.com.
- Gmail Client for Alfred (GitHub repo) by fniephaus (on GitHub). Manage your Gmail inbox with Alfred.
- Google Drive (GitHub repo) by azai91 (on GitHub). Browse, search and open Google Drive files from within Alfred.
- HackerNews for Alfred (GitHub repo) by fniephaus (on GitHub). Read Hacker News with Alfred.
- HGNC Search (GitHub repo) by danielecook (on GitHub). Search for human genes.
- Homebrew and Cask for Alfred (GitHub repo) by fniephaus (on GitHub). Easily control Homebrew and Cask with Alfred.
- IME (GitHub repo) by owenwater (on GitHub). An input method workflow based on Google Input Tools.
- iOS Simulator (GitHub repo) by jfro (on GitHub). Workflow for finding simulator app data folders, erasing apps and more.
- IPython Notebooks (GitHub repo) by nkeim (on GitHub). Search notebook titles on your IPython notebook server.
- Jenkins (GitHub repo) by Amwam (on GitHub). Show and search through jobs on Jenkins.
- Julian Date calculator (GitHub repo) by Tam-Lin (on GitHub). Converts dates to/from Julian dates, as well as some date math.
- KA Torrents by hackademic. Search and download torrents from kickass.so.
- Laser SSH by paperElectron. Choose SSH connection from filterable list.
- LastPass Vault Manager (GitHub repo) by bachya (on GitHub). A workflow to interact with a LastPass vault.
- LibGen (GitHub repo) by hackademic (on GitHub). Search and Download pdfs and ebooks from Library Genesis.
- MailTo (GitHub repo) by deanishe (on GitHub). Send mail to contacts and groups from your Address Book.
- Movie and TV Show Search (GitHub repo) by tone (on GitHub). Search for movies and tv shows to find ratings from a few sites.
- Network Location (GitHub repo) by deanishe (on GitHub). List, filter and activate network locations from within Alfred.
- Packal Workflow Search (GitHub repo) by deanishe (on GitHub). Search Packal.org from the comfort of Alfred.
- Pandoctor (GitHub repo) by hackademic (on GitHub). An Alfred GUI for Pandoc.
- Parsers (GitHub repo) by hackademic (on GitHub). Greek and Latin parsers.
- pass (GitHub repo) by mwest (on GitHub). Provide a minimal wrapper over the pass password manager (passwordstore.org).
- Percent Change (GitHub repo) by bkmontgomery (on GitHub). Easily do percentage calculations.
- PHPStorm project opener (GitHub repo) by hansdubois (on GitHub). PHPStorm project opener.
- Pocket for Alfred (GitHub repo) by fniephaus (on GitHub). Manage your Pocket list with Alfred.
- Product Hunt (GitHub repo) by loris (on GitHub). List Product Hunt today’s hunts.
- ProductHunt (GitHub repo) by chiefy (on GitHub). Read ProductHunt in Alfred.
- PWS History (GitHub repo) by hrbrmstr (on GitHub). Retrieve personal weather station history from Weather Underground.
- Quick Stocks by paperElectron. Add some stock symbols for Alfred to check for you.
- Ramda Docs (GitHub repo) by raine (on GitHub). Search Ramda documentation.
- Rates (GitHub repo) by Kennedy Oliveira (on GitHub). Simple exchange rates for alfred.
- Readability for Alfred (GitHub repo) by fniephaus (on GitHub). Manage your Readability list with Alfred.
- Reddit (GitHub repo) by deanishe (on GitHub). Browse Reddit from Alfred.
- Relative Dates (GitHub repo) by deanishe (on GitHub). Generate relative dates based on a simple input format.
- Resolve URL (GitHub repo) by deanishe (on GitHub). Follows any HTTP redirects and returns the canonical URL. Also displays information about the primary host (hostname, IP address(es), aliases).
- Rotten Search (GitHub repo) by yakiyama (on GitHub). Search movie from RottenTomatoes.com.
- Search Omnifocus (GitHub repo) by rhyd (on GitHub). This is a workflow that performs free text searches on OmniFocus data.
- Searchio! (GitHub repo) by deanishe (on GitHub). Auto-suggest search results from multiple search engines and languages.
- Secure Password Generator (GitHub repo) by deanishe (on GitHub). Generate secure random passwords from Alfred. Uses /dev/urandom as source of entropy.
- SEND by hackademic. Send documents to the cloud.
- Seq-utilies (GitHub repo) by danielecook (on GitHub). Fetch complement, reverse complement, RNA, and protein sequences. Generate random DNA. Blast a sequence.
- Simple Timer by Paul Eunjae Lee. A very simple timer.
- Skimmer (GitHub repo) by hackademic (on GitHub). Actions for PDF viewer Skim.
- slackfred (GitHub repo) by frankspin (on GitHub). Interact with the chat service Slack via Alfred (multi-org supported).
- Snippets (GitHub repo) by hackademic (on GitHub). Simple, document-specific text snippets.
- Spritzr (GitHub repo) by hackademic (on GitHub). An Alfred Speed-Reader.
- StackOverflow Search (GitHub repo) by deanishe (on GitHub). Search StackOverflow.com from Alfred.
- Sublime Text Projects (GitHub repo) by deanishe (on GitHub). View, filter and open your Sublime Text (2 and 3) project files.
- Torrent (GitHub repo) by bfw (on GitHub). Search for torrents, choose among the results in Alfred and start the download in uTorrent.
- Travis CI for Alfred by fniephaus. Quickly check build statuses on travis-ci.org.
- UberTime (GitHub repo) by frankspin (on GitHub). Check estimated pick up time for Uber based on inputted address.
- URL craft by takanabe. A workflow that transforms a URL into a new one in formats such as a “GitHub Flavored Markdown link” or a shortened URL.
- VagrantUP (GitHub repo) by m1keil (on GitHub). List and control Vagrant environments with Alfred2.
- VM Control (GitHub repo) by fniephaus (on GitHub). Control your Parallels and Virtual Box virtual machines.
- Wikify (GitHub repo) by hackademic (on GitHub). Your little Evernote Wiki-Helper.
- Workon Virtualenv (GitHub repo) by johnnycakes79 (on GitHub). Workflow to list and start python virtualenvs (assumes you have virtualenv and virtualenvwrapper installed).
- Wowhead (GitHub repo) by owenwater (on GitHub). An Alfred workflow that helps you search World of Warcraft® database provided by wowhead.com.
- Wunderlist3.alfredworkflow (GitHub repo) by gnostic (on GitHub). A Wunderlist 3 API cloud-based alfred workflow.
- Youdao Dict (GitHub repo) by WhyLiam (on GitHub). 使用有道翻译你想知道的单词和语句.
- ZotQuery (GitHub repo) by hackademic (on GitHub). Search Zotero. From the Comfort of Your Keyboard.
Feedback, questions, bugs, feature requests¶
If you have feedback or a question regarding Alfred-Workflow, please post them in the Alfred forum thread.
If you have a bug report or a feature request, please create a new issue on GitHub.
You can also email me at deanishe@deanishe.net with any questions/feedback/bug reports. However, it’s generally better to use the forum/GitHub so that other users can benefit from and contribute to the conversation.