Background processes
Many workflows provide a convenient interface to applications and/or web services.
For performance reasons, it’s common for workflows to cache data locally, but updating this cache typically takes a few seconds. That makes your workflow unresponsive while the update runs, which is very un-Alfred-like.
To avoid such delays, Alfred-Workflow provides the background module, which lets you easily run scripts in the background. Two functions, run_in_background() and is_running(), provide the interface. The processes started are full daemon processes, so you can start real servers as easily as simple scripts.
Here’s an example of a common usage pattern (updating cached data in the background). What we’re doing is:

- Checking the age of the cached data and running the update script via run_in_background() if the cached data are too old or don’t exist.
- (Optionally) informing the user that data are being updated.
- Loading the cached data regardless of age.
- Displaying the cached data (if any).
from workflow import Workflow, ICON_INFO
from workflow.background import run_in_background, is_running


def main(wf):
    # Is cache over 1 hour old or non-existent?
    if not wf.cached_data_fresh('exchange-rates', 3600):
        run_in_background('update',
                          ['/usr/bin/python',
                           wf.workflowfile('update_exchange_rates.py')])

    # Add a notification if the script is running
    if is_running('update'):
        wf.add_item('Updating exchange rates...', icon=ICON_INFO)

    # max_age=0 will load any cached data regardless of age
    exchange_rates = wf.cached_data('exchange-rates', max_age=0)

    # Display (possibly stale) cached data
    if exchange_rates:
        for rate in exchange_rates:
            wf.add_item(rate)

    # Send results to Alfred
    wf.send_feedback()


if __name__ == '__main__':
    wf = Workflow()
    wf.run(main)
For a working example, see Part 2 of the Tutorial or the source code of my Git Repos workflow, which is a bit smarter about showing the user update information.