-
Notifications
You must be signed in to change notification settings - Fork 48
/
search.json
613 lines (613 loc) · 320 KB
/
search.json
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
52
53
54
55
56
57
58
59
60
61
62
63
64
65
66
67
68
69
70
71
72
73
74
75
76
77
78
79
80
81
82
83
84
85
86
87
88
89
90
91
92
93
94
95
96
97
98
99
100
101
102
103
104
105
106
107
108
109
110
111
112
113
114
115
116
117
118
119
120
121
122
123
124
125
126
127
128
129
130
131
132
133
134
135
136
137
138
139
140
141
142
143
144
145
146
147
148
149
150
151
152
153
154
155
156
157
158
159
160
161
162
163
164
165
166
167
168
169
170
171
172
173
174
175
176
177
178
179
180
181
182
183
184
185
186
187
188
189
190
191
192
193
194
195
196
197
198
199
200
201
202
203
204
205
206
207
208
209
210
211
212
213
214
215
216
217
218
219
220
221
222
223
224
225
226
227
228
229
230
231
232
233
234
235
236
237
238
239
240
241
242
243
244
245
246
247
248
249
250
251
252
253
254
255
256
257
258
259
260
261
262
263
264
265
266
267
268
269
270
271
272
273
274
275
276
277
278
279
280
281
282
283
284
285
286
287
288
289
290
291
292
293
294
295
296
297
298
299
300
301
302
303
304
305
306
307
308
309
310
311
312
313
314
315
316
317
318
319
320
321
322
323
324
325
326
327
328
329
330
331
332
333
334
335
336
337
338
339
340
341
342
343
344
345
346
347
348
349
350
351
352
353
354
355
356
357
358
359
360
361
362
363
364
365
366
367
368
369
370
371
372
373
374
375
376
377
378
379
380
381
382
383
384
385
386
387
388
389
390
391
392
393
394
395
396
397
398
399
400
401
402
403
404
405
406
407
408
409
410
411
412
413
414
415
416
417
418
419
420
421
422
423
424
425
426
427
428
429
430
431
432
433
434
435
436
437
438
439
440
441
442
443
444
445
446
447
448
449
450
451
452
453
454
455
456
457
458
459
460
461
462
463
464
465
466
467
468
469
470
471
472
473
474
475
476
477
478
479
480
481
482
483
484
485
486
487
488
489
490
491
492
493
494
495
496
497
498
499
500
501
502
503
504
505
506
507
508
509
510
511
512
513
514
515
516
517
518
519
520
521
522
523
524
525
526
527
528
529
530
531
532
533
534
535
536
537
538
539
540
541
542
543
544
545
546
547
548
549
550
551
552
553
554
555
556
557
558
559
560
561
562
563
564
565
566
567
568
569
570
571
572
573
574
575
576
577
578
579
580
581
582
583
584
585
586
587
588
589
590
591
592
593
594
595
596
597
598
599
600
601
602
603
604
605
606
607
608
609
610
611
612
613
{
"entries": [{
"title": "All Articles",
"url": "/articles/index.html",
"section": null,
"date": null,
"categories": [],
"summary": null,
"short": "",
"body": " "
}, {
"title": "IronCache Documentation",
"url": "/cache/index.html",
"section": null,
"date": null,
"categories": [],
"summary": null,
"short": "Share state between your applications and processes using a key/value store built for the cloud. 1 Configure Your Client 2 Set the Item in the Cache 3 Retrieve the Item from the Cache 4 Delete the Item from the Cache 1. Configure Your Client Before you can interact with IronCache, you're going to have to configure your library of choice",
"body": "Share state between your applications and processes using a key/value store built for the cloud. 1 Configure Your Client 2 Set the Item in the Cache 3 Retrieve the Item from the Cache 4 Delete the Item from the Cache 1. Configure Your Client Before you can interact with IronCache, you're going to have to configure your library of choice to use your project ID and your OAuth2 token. You can retrieve your project ID and token from the HUD . Our official client libraries all support the same configuration scheme , which you can read up on if you want a highly-customised development environment. To get started though, just save the following as \" iron.json \": iron.json { \"project_id\" : \"INSERT YOUR PROJECT ID HERE\" , \"token\" : \"INSERT YOUR TOKEN HERE\" } You're all configured! The client libraries will all automatically just read that file and use your credentials to authenticate requests. 2. Set the Item in the Cache Adding an item to a cache and updating an item in a cache are the same thing; if the item exists, it will be overwritten. Otherwise, it will be created. You can set an item using any of the client libraries or even using our REST/HTTP API directly. We're going to use the Ruby library for these examples. iron_cache.rb @client = IronCache :: Client . new @cache = @client . cache ( \"test_cache\" ) @cache . put ( \"test_item\" , \"Hello, World!\" ) The cache is technically a key/value store that enforces strings as the key and the value. It is common, however, to use JSON to store complex objects in the cache. The client libraries will all transparently translate all non-string values into their JSON equivalent before storing them in the cache. iron_cache.rb @client = IronCache :: Client . new @cache = @client . cache ( \"test_cache\" ) @cache . put ( \"complex_item\" , { \"greeting\" => \"Hello\" , \"target\" => \"world\" }) The server also detects numbers and stores them natively as numbers, which lets you increment and decrement them atomically. This means that if two clients try to increment the item at the same time, they won't overwrite each other; the item will be incremented twice. This is extremely useful when clients are run in a highly parallel environment like IronWorker. iron_cache.rb @client = IronCache :: Client . new @cache = @client . cache ( \"test_cache\" ) @cache . put ( \"number_item\" , 42 ) # store a number item = @cache . get ( \"number_item\" ) # retrieve the item p item . value # output the value @cache . increment ( \"number_item\" ) # increment the item p @cache . get ( \"number_item\" ) . value # retrieve the item and print its value again Although IronCache was originally designed for short-term storage, it has since evolved into a permanent data storage solution. By default, your cache items will not be deleted until you manually delete them. Sometimes, however, it's still helpful to set an expiration date on data you want to be short-lived. To do this, just use the expires_in parameter to set the number of seconds the data should be available for. iron_cache.rb @client = IronCache :: Client . new @cache = @client . cache ( \"test_cache\" ) @cache . put ( \"long_lived_item\" , 42 , { :expires_in => 60 * 60 * 24 * 30 }) # this item won't expire for a month 3. Retrieve the Item from the Cache Retrieving an item from the cache is fairly straightforward: iron_cache.rb @client = IronCache :: Client . new @cache = @client . cache ( \"test_cache\" ) item = @cache . get ( \"number_item\" ) p item . value Unlike IronMQ, you do not lock an item when you retrieve it from a cache. Two or more clients may retrieve an item at the same time. 4. Delete the Item from the Cache Should you decide you would like to remove an item from a cache before its expiration date (or to remove items with no expiration date), you can do so with a simple API call: iron_cache.rb @client = IronCache :: Client . new @cache = @client . cache ( \"test_cache\" ) @cache . delete ( \"number_item\" ) # remove it Next Steps You should be pretty familiar with IronCache now—now you need to build something cool! To get up and running quickly, you may want to look into our Memcache support. Check out our reference material to explore the boundaries of IronCache's system. If you need suggestions for how to use IronCache, you may want to take a look at our solutions for some inspiration. "
}, {
"title": "IronCache REST/HTTP API",
"url": "/cache/reference/api/index.html",
"section": null,
"date": null,
"categories": [],
"summary": null,
"short": "IronCache provides a REST/HTTP API to allow you to interact programmatically with your caches on IronCache. Table of Contents Endpoints Authentication Requests Base URL Pagination Responses Status Codes Exponential Backoff Endpoints URL HTTP Verb Purpose /projects/ {Project ID} /caches GET List Caches /projects/ {Project ID} /caches/ {Cache Name} GET Get Info About a Cache /projects/ {Project ID} /caches/ {Cache Name}",
"body": "IronCache provides a REST/HTTP API to allow you to interact programmatically with your caches on IronCache. Table of Contents Endpoints Authentication Requests Base URL Pagination Responses Status Codes Exponential Backoff Endpoints URL HTTP Verb Purpose /projects/ {Project ID} /caches GET List Caches /projects/ {Project ID} /caches/ {Cache Name} GET Get Info About a Cache /projects/ {Project ID} /caches/ {Cache Name} DELETE Delete a Cache /projects/ {Project ID} /caches/ {Cache Name} /clear POST Clear a Cache /projects/ {Project ID} /caches/ {Cache Name} /items/ {Key} PUT Put an Item into a Cache /projects/ {Project ID} /caches/ {Cache Name} /items/ {Key} /increment POST Increment an Item's value /projects/ {Project ID} /caches/ {Cache Name} /items/ {Key} GET Get an Item from a Cache /projects/ {Project ID} /caches/ {Cache Name} /items/ {Key} DELETE Delete an Item from a Cache Authentication IronCache uses OAuth2 tokens to authenticate API requests. All methods require authentication unless specified otherwise. You can find and create your API tokens in the HUD . To authenticate your request, you should include a token in the Authorization header for your request or in your query parameters. Tokens are universal, and can be used across services. Note that each request also requires a Project ID to specify which project the action will be performed on. You can find your Project IDs in the HUD . Project IDs are also universal, so they can be used across services as well. Example Authorization Header : Authorization: OAuth abc4c7c627376858 Example Query with Parameters : GET https://cache-aws-us-east-1.iron.io/1/projects/ {Project ID} /caches?oauth=abc4c7c627376858 Notes: Be sure you have the correct case, it's OAuth , not Oauth. In URL parameter form, this will be represented as: ?oauth=abc4c7c627376858 Requests Requests to the API are simple HTTP requests against the API endpoints. All request bodies should be in JSON format, with Content-Type of application/json . Base URL All endpoints should be prefixed with the following: https:// {Host} .iron.io/ {API Version} / API Version Support : IronCache API supports version 1 The domains for the clouds IronCache supports are as follows: Cloud {Domain} AWS cache-aws-us-east-1 Pagination For endpoints that return lists/arrays of values: page - The page of results to return. Default is 0. Maximum is 100. per_page - The number of results to return. It may be less if there aren't enough results. Default is 30. Maximum is 100. Responses All responses are in JSON, with Content-Type of application/json . A response is structured as follows: { \"msg\" : \"some success or error message\" } Status Codes The success or failure of a request is indicated by an HTTP status code. A 2xx status code indicates success, whereas a 4xx or 5xx status code indicates an error. Code Status 200 OK: Successful GET 201 Created: Successful POST 400 Bad Request: Invalid JSON (can't be parsed or has wrong types). 401 Unauthorized: The OAuth token is either not provided or invalid. 403 Project suspected, resource limits. 404 Not Found: The resource, project, or endpoint being requested doesn’t exist. 405 Invalid HTTP method: A GET, POST, DELETE, or PUT was sent to an endpoint that doesn’t support that particular verb. 406 Not Acceptable: Required fields are missing. 500 Internal Server Error: Something went wrong on the server side. 503 Service Unavailable: A temporary error occurred with the request. Clients should implement exponential backoff and retry the request. Specific endpoints may provide other errors in other situations. When there's an error, the response body contains a JSON object similar to the following: { \"msg\" : \"reason for error\" } Exponential Backoff When a 503 error code is returned, it signifies that the server is currently unavailable. This means there was a problem processing the request on the server-side; it makes no comment on the validity of the request. Libraries and clients should use exponential backoff when confronted with a 503 error, retrying their request with increasing delays until it succeeds or a maximum number of retries (configured by the client) has been reached. List Caches Get a list of all caches in a project. 100 caches are listed at a time. To see more, use the page parameter. Endpoint GET /projects/ {Project ID} /caches URL Parameters Project ID : Project these caches belong to Optional URL Parameters page : The 0-based page to view. The default is 0. See pagination . Response [ { \"project_id\" : \"PROJECT ID\" , \"name\" : \"CACHE NAME\" }, { \"project_id\" : \"PROJECT ID\" , \"name\" : \"CACHE NAME\" } ] Get Info About a Cache This call gets general information about a cache. Endpoint GET /projects/ {Project ID} /caches/ {Cache Name} URL Parameters Project ID : Project the cache belongs to Cache Name : The name of the cache Response { \"size\" : \"cache size\" } Delete a Cache Delete a cache and all items in it. Endpoint DELETE /projects/ {Project ID} /caches/ {Cache Name} URL Parameters Project ID : Project the cache belongs to Cache Name : The name of the cache Response { \"msg\" : \"Deleted.\" } Clear a Cache Delete all items in a cache. This cannot be undone. Endpoint POST /projects/ {Project ID} /caches/ {Cache Name} /clear URL Parameters Project ID : Project the cache belongs to Cache Name : The name of the cache whose items should be cleared. Response { \"msg\" : \"Cleared.\" } Put an Item into a Cache This call puts an item into a cache. Endpoint PUT /projects/ {Project ID} /caches/ {Cache Name} /items/ {Key} URL Parameters Project ID : The project these items belong to. Cache Name : The name of the cache. If the cache does not exist, it will be created for you. Key : The key to store the item under in the cache. Item Object Each item object should contain the following keys: Required value : The item's data Optional expires_in : How long in seconds to keep the item in the cache before it is deleted. By default, items do not expire. Maximum is 2,592,000 seconds (30 days). replace : If set to true, only set the item if the item is already in the cache. If the item is not in the cache, do not create it. add : If set to true, only set the item if the item is not already in the cache. If the item is in the cache, do not overwrite it. cas : If set, the new item will only be placed in the cache if there is an existing item with a matching key and cas value. An item's cas value is automatically generated and is included when the item is retrieved . Request { \"value\" : \"This is my cache item.\" , \"expires_in\" : 86400 , \"replace\" : true } Response { \"msg\" : \"Stored.\" } Increment an Item's value This call increments the numeric value of an item in the cache. The amount must be a number and attempting to increment non-numeric values results in an error. Negative amounts may be passed to decrement the value. The increment is atomic, so concurrent increments will all be observed. Endpoint POST /projects/ {Project ID} /caches/ {Cache Name} /items/ {Key} /increment URL Parameters Project ID : The project the item belongs to. Cache Name : The name of the cache. If the cache does not exist, it will be created for you. Key : The key of the item to increment. Request Parameters The request body should contain the following keys: Required amount : The amount to increment the value, as an integer. If negative, the value will be decremented. Request { \"amount\" : 10 } Response { \"msg\" : \"Added\" , \"value\" : 132 } Get an Item from a Cache This call retrieves an item from the cache. The item will not be deleted. Endpoint GET /projects/ {Project ID} /caches/ {Cache Name} /items/ {Key} URL Parameters Project ID : The project the cache belongs to. Cache Name : The name of the cache the item belongs to. Key : The key the item is stored under in the cache. Response { \"cache\" : \"CACHE NAME\" , \"key\" : \"ITEM KEY\" , \"value\" : \"ITEM VALUE\" , \"cas\" : \"12345\" } Delete an Item from a Cache This call will delete the item from the cache. Endpoint DELETE /projects/ {Project ID} /caches/ {Cache Name} /items/ {Key} URL Parameters Project ID : The project the cache belongs to. Cache Name : The name of the cache the item belongs to. Key : The key the item is stored under in the cache. Response { \"msg\" : \"Deleted.\" } "
}, {
"title": "Configuring the Official Client Libraries",
"url": "/cache/reference/configuration/index.html",
"section": null,
"date": null,
"categories": [],
"summary": null,
"short": "Many of the client libraries make use of a global configuration scheme for all of Iron.io services. This approach lets you set and manage your tokens and project IDs in a centralized manner and make them available across all of Iron.io's services, even across workspaces. This scheme allows you to spend less time on configuration issues and more on writing",
"body": "Many of the client libraries make use of a global configuration scheme for all of Iron.io services. This approach lets you set and manage your tokens and project IDs in a centralized manner and make them available across all of Iron.io's services, even across workspaces. This scheme allows you to spend less time on configuration issues and more on writing code. It also supports the design pattern that calls for strict separation of configuration information from application code. The two most common variables used in configuration are the project ID and the token . The project ID is a unique identifier for your project and can be found in the HUD . The token is one of your OAuth2 tokens, which can be found on their own page in the HUD. Table of Contents Quick Start About the Scheme The Overall Hierarchy The Environment Variables The File Hierarchy The JSON Hierarchy Example Setting Host Example Accepted Values Quick Start Create a file called .iron.json in your home directory (i.e., ~/.iron.json ) and enter your Iron.io credentials: .iron.json { \"token\" : \"MY_TOKEN\" , \"project_id\" : \"MY_PROJECT_ID\" } The project_id you use will be the default project to use. You can always override this in your code. Alternatively, you can set the following environment variables: IRON_TOKEN = MY_TOKEN IRON_PROJECT_ID = MY_PROJECT_ID That's it, now you can get started. About the Scheme The configuration scheme consists of three hierarchies: the file hierarchy, the JSON hierarchy, and the overall hierarchy. By understanding these three hierarchies and how clients determine the final configuration values, you can build a powerful system that saves you redundant configuration while allowing edge cases. The Overall Hierarchy The overall hierarchy is simple to understand: local takes precedence over global. The configuration is constructed as follows: The global configuration file sets the defaults according to the file hierarchy. The global environment variables overwrite the global configuration file's values. The product-specific environment variables overwrite everything before them. The local configuration file overwrites everything before it according to the file hierarchy. The configuration file specified when instantiating the client library overwrites everything before it according to the file hierarchy. The arguments passed when instantiating the client library overwrite everything before them. The Client's Environment Variables set in iron.json The environment variables the scheme looks for are all of the same formula: the camel-cased product name is switched to an underscore (\"IronWorker\" becomes \"iron_worker\") and converted to be all capital letters. For the global environment variables, \"IRON\" is used by itself. The value being loaded is then joined by an underscore to the name, and again capitalised. For example, to retrieve the OAuth token, the client looks for \"IRON_TOKEN\". In the case of product-specific variables (which override global variables), it would be \"IRON_WORKER_TOKEN\" (for IronWorker). Accepted Values The configuration scheme looks for the following values: project_id : The ID of the project to use for requests. token : The OAuth token that should be used to authenticate requests. Can be found in the HUD . host : The domain name the API can be located at. Defaults to a product-specific value, but always using Amazon's cloud. protocol : The protocol that will be used to communicate with the API. Defaults to \"https\", which should be sufficient for 99% of users. port : The port to connect to the API through. Defaults to 443, which should be sufficient for 99% of users. api_version : The version of the API to connect through. Defaults to the version supported by the client. End-users should probably never change this. Note that only the project_id and token values need to be set. They do not need to be set at every level of the configuration, but they must be set at least once by the levels that are used in any given configuration. It is recommended that you specify a default project_id and token in your iron.json file. The File Hierarchy The hierarchy of files is simple enough: if a file named .iron.json exists in your home folder, that will provide the defaults. if a file named iron.json exists in the same directory as the script being run, that will be used to overwrite the values from the .iron.json file in your home folder. Any values in iron.json that are not found in .iron.json will be added; any values in .iron.json that are not found in iron.json will be left alone; any values in .iron.json that are found in iron.json will be replaced with the values in iron.json . This allows a lot of flexibility: you can specify a token that will be used globally (in .iron.json ), then specify the project ID for each project in its own iron.json file. You can set a default project ID, but overwrite it for that one project that uses a different project ID. The JSON Hierarchy Each file consists of a single JSON object, potentially with many sub-objects. The JSON hierarchy works in a similar manner to the file hierarchy: the top level provides the defaults. If the top level contains a JSON object whose key is an Iron.io service ( iron_worker , iron_mq , or iron_cache ), that will be used to overwrite those defaults when one of their clients loads the config file. This allows you to define a project ID once and have two of the services use it, but have the third use a different project ID. Example In the event that you wanted to set a token that would be used globally, you would set ~/.iron.json to look like this: .iron.json { \"token\" : \"YOUR TOKEN HERE\" } To follow this up by setting your project ID for each project, you would create an iron.json file in each project's directory: iron.json { \"project_id\" : \"PROJECT ID HERE\" } If, for one project, you want to use a different token, simply include it in that project's iron.json file: iron.json { \"project_id\" : \"PROJECT ID HERE\" , \"token\" : \"YOUR TOKEN HERE\" } Now for that project and that project only , the new token will be used. If you want all your IronCache projects to use a different project ID, you can put that in the ~/.iron.json file: .iron.json { \"project_id\" : \"GLOBAL PROJECT ID\" , \"iron_cache\" : { \"project_id\" : \"IRONCACHE ONLY PROJECT ID\" } } If you don't want to write things to disk or on Heroku or a similar platform that has integrated with Iron.io to provide your project ID and token automatically, the library will pick them up for you automatically. Setting Host It is useful to quickly change your host in cases where your region has gone down. If want to set the Host, Post, and Protocol specifically, simply include those keys in that project's iron.json file: iron.json { \"project_id\" : \"PROJECT ID HERE\" , \"token\" : \"YOUR TOKEN HERE\" \"port\" : 443 , \"protocol\" : \"https\" , \"host\" : \"mq-rackspace-ord.iron.io\" } "
}, {
"title": "IronCache Environment",
"url": "/cache/reference/environment/index.html",
"section": null,
"date": null,
"categories": [],
"summary": null,
"short": "Table of Contents Item Structure Item Constraints Item Structure Caches are key/value stores comprised of items . The Item structure is flexible and straight-forward. Items can be variable in size and can contain almost any text or data format. Item Element Type Token OAuth2 access token (string) expires_in (optional) Integer (seconds) key URL-encoded string value string Item Constraints Item Var",
"body": " Table of Contents Item Structure Item Constraints Item Structure Caches are key/value stores comprised of items . The Item structure is flexible and straight-forward. Items can be variable in size and can contain almost any text or data format. Item Element Type Token OAuth2 access token (string) expires_in (optional) Integer (seconds) key URL-encoded string value string Item Constraints Item Var Default Maximum Notes Item Size -- 1MB Includes the entire body of the request (expiration, etc.) Key -- -- Because it is part of the URL in the API request , the key must be URL encoded. Key Size 250 characters Expiration 604,800sec 2,592,000sec Equates to 7 days and 30 days, respectively. This field is optional. By default, cache items will persist forever . "
}, {
"title": "IronCache Reference",
"url": "/cache/reference/index.html",
"section": null,
"date": null,
"categories": [],
"summary": null,
"short": "The IronCache reference documentation contains all the low-level information about IronCache. Every little detail has been recorded here for you. REST/HTTP API Every endpoint, every parameter of our API is at your fingertips. Environment Know exactly how your information will be stored in your caches. Configuration Everything you need to know to make the IronCache client libraries work they way",
"body": "The IronCache reference documentation contains all the low-level information about IronCache. Every little detail has been recorded here for you. REST/HTTP API Every endpoint, every parameter of our API is at your fingertips. Environment Know exactly how your information will be stored in your caches. Configuration Everything you need to know to make the IronCache client libraries work they way you want them to. Client Libraries A big list of the officially-supported client libraries, so you can use IronCache in your language-of-choice. Memcache Interface All the information you need to hook your memcache client up to IronCache's scalable backend. Something Missing? Can't find the information you need here? Our engineers are always available and will be happy to answer questions. "
}, {
"title": "Client Libraries",
"url": "/cache/reference/libraries/index.html",
"section": null,
"date": null,
"categories": [],
"summary": null,
"short": "Official Client Libraries These are our official client libraries that use the IronCache REST/HTTP API . Ruby PHP Python .NET Go Rails Unofficial Client Libraries These are some unofficial client libraries that use the IronCache REST/HTTP API . Node.JS - node-ironio by Andrew Hallock .NET - IronTools by Oscar Deits .NET - IronSharp by Jeremy Bell Java - IronCache by",
"body": "Official Client Libraries These are our official client libraries that use the IronCache REST/HTTP API . Ruby PHP Python .NET Go Rails Unofficial Client Libraries These are some unofficial client libraries that use the IronCache REST/HTTP API . Node.JS - node-ironio by Andrew Hallock .NET - IronTools by Oscar Deits .NET - IronSharp by Jeremy Bell Java - IronCache by Philip/mrcritical ActionScript 3.0 - IronCache-Client by Evgenios Skitsanos PHP - Codeigniter-Iron.io by jrutheiser Perl - IO::Iron by Mikko Koivunalho We will continue to add more clients for the REST/HTTP API. If you would like to see one in particular, please let us know. We're also totally supportive if you want to build or modify client libraries yourself. Feel free to jump into our live chat support for help. We love community involvement! "
}, {
"title": "Memcache Interface",
"url": "/cache/reference/memcache/index.html",
"section": null,
"date": null,
"categories": [],
"summary": null,
"short": "As an industry standard, memcached has accumulated an extensive list of supported languages, so it's extremely likely your language of choice is supported. It's important to note that only the text protocol is supported in IronCache. The binary protocol for memcached is not supported at this time. Using the memcached interface does not encrypt your credentials during transport. Table of",
"body": "As an industry standard, memcached has accumulated an extensive list of supported languages, so it's extremely likely your language of choice is supported. It's important to note that only the text protocol is supported in IronCache. The binary protocol for memcached is not supported at this time. Using the memcached interface does not encrypt your credentials during transport. Table of Contents Memcache Libraries Host Information Authentication Example Install the Library Run the Example Memcache Libraries Here's a sample list of languages available (with multiple clients libs to choose from for many languages): C C++ Perl OCaml Django PHP Lisp Python Erlang Rails Go Ruby Scheme Java Io .NET/C# You can use any of the memcached clients with IronCache. Host Information To connect to IronCache using memcached, use the host below: Host Port cache-aws-us-east-1.iron.io 11211 Authentication Because IronCache requires authentication, clients must set a pseudo-item as soon as they connect. Set the \"oauth\" key to the following: {TOKEN} {PROJECT_ID} {CACHE_NAME} This will not be stored in your cache. Subsequent attempts to set the value of the \"oauth\" key, however, will be stored in the cache. Example The following example should help you get up and running using IronCache with memcached quickly: Install the Library The sample uses the memcache-client gem; you'll need to install it before you can use the sample. Note: The popular Dalli client will not work, as it requires support for the binary memcached protocol, which IronCache does not support at this time. To install memcache-client, just run the following from your command line: Command Line $ gem install memcache-client Run the Example iron cache memcache.rb require 'memcache' # connect mc = MemCache . new ( [ 'cache-aws-us-east-1.iron.io:11211' ] ) # Tokens can be retrieved from https://hud.iron.io/tokens token = \"Insert your token here\" # Project IDs are listed at https://hud.iron.io project_id = 'Insert your project_id here' cache_name = 'Give your cache a unique name' # authenticate, expiration is 0, don't use marshal serialization mc . set ( 'oauth' , \" #{ token } #{ project_id } #{ cache_name } \" , 0 , true ) # store for 5 seconds mc . set ( 'abc' , 123 , 5 ) # retrieve p mc . get ( 'abc' ) sleep 5 # and it's gone p mc . get ( 'abc' ) "
}, {
"title": "Crowd-Sourcing the Dev Center",
"url": "/community/docs/index.html",
"section": null,
"date": null,
"categories": [],
"summary": null,
"short": "We're looking to the community to help us make the Dev Center the best of its kind. We want this sticker to be your sticker. So we're giving it to you. All you have to do is contribute to the Dev Center— open an issue or fork our repository , fix something and send us a pull request. Not only",
"body": "We're looking to the community to help us make the Dev Center the best of its kind. We want this sticker to be your sticker. So we're giving it to you. All you have to do is contribute to the Dev Center— open an issue or fork our repository , fix something and send us a pull request. Not only do you get a better Dev Center, you get a really sweet sticker. How's that for a win-win? For more active contributors, we have these really cool t-shirts, as well. You can hack away on your latest super-cool project in style with this awesome shirt—the very same shirts our team sports out and about! We award these for continued and significant contributions to the Dev Center, so just keep filing issues and sending pull requests, and we'll get in touch. Or we suppose you could just ask. But wait, there's more! We also send out coffee, energy drinks, and other assorted goods on occasion for extra special contributions. These can be anything from gobs of hackerfuel to limited edition Iron.io swag. Surprise us with some great work and you'll win a bunch of fans at Iron.io. "
}, {
"title": "Community Events",
"url": "/community/events/index.html",
"section": null,
"date": null,
"categories": [],
"summary": null,
"short": "We love our community of developers, so we do our best to maintain a presence in the community. We run GoSF and SFRails , user groups in San Francisco for our favourite languages. We also try to be present at as many hackathons and conferences as we can. You can use this page to keep track of where we'll be.",
"body": "We love our community of developers, so we do our best to maintain a presence in the community. We run GoSF and SFRails , user groups in San Francisco for our favourite languages. We also try to be present at as many hackathons and conferences as we can. You can use this page to keep track of where we'll be. Upcoming Events We list the next ten events we'll be at here. Feel free to check back, or add the Google Calendar to your calendar. We're loading the calendar. Hold tight! function truncate(string) { string = $.trim(string); newline = string.indexOf(\"\\n\"); period = string.indexOf(\". \"); if(newline == -1 && period == -1) { return string; } else if(newline > period && period > 0) { return string.substring(0, period + 1); } else { return string.substring(0, newline); } } function addItem(title, link, description, time) { if(title == null || link == null || time == null) { return } date = new Date(time); newevent = \"<li><strong>\" + (date.getMonth() + 1) + \"/\" + date.getDate() + \"</strong> <a href=\\\"\" + link + \"\\\" title=\\\"View on Google Calendar\\\">\" + title + \"</a>\"; if(description != null && truncate(description).length > 0) { newevent += \": \" + truncate(description); if(truncate(description).length < description.length && truncate(description).length > 0) { newevent += \" <a href=\\\"\" + link + \"\\\" title=\\\"View on Google Calendar\\\">[more]</a>\"; } } newevent += \"</li>\"; $(\"#events\").append(newevent); } function OnLoadCallback() { gapi.client.setApiKey(\"AIzaSyAUmHCXg0cnmgB7Yk_pU-LyXOHMm2sCKMo\"); gapi.client.load(\"calendar\", \"v3\", function() { request = gapi.client.request({ 'path': '/calendar/v3/calendars/iron.io_g9k00qdk4cnfbsk5c353ois9ok%40group.calendar.google.com/events', 'params': { \"alwaysIncludeEmail\": false, \"maxResults\": 10, \"orderBy\": \"startTime\", \"singleEvents\": true, \"timeMin\": (new Date()).toISOString(), \"fields\": \"items(description,htmlLink,location,start,summary)\" } }); request.execute(function(resp) { for(var i=0; i < resp.items.length; i++) { addItem(resp.items[i].summary, resp.items[i].htmlLink, resp.items[i].description, resp.items[i].start.dateTime); } $(\"#events\").css(\"display\", \"\"); $(\"#loading_message\").hide(); }); }); } "
}, {
"title": "Iron.io Community",
"url": "/community/index.html",
"section": null,
"date": null,
"categories": [],
"summary": null,
"short": "At Iron.io, we're developers at heart, so we take pride in creating services that let other developers do great things. That requires a great developer experience, and while we work hard at that, we'd love your help. Whether you're running your first worker or are an expert at building beautiful systems of connected processes, we want your feedback, good, bad,",
"body": "At Iron.io, we're developers at heart, so we take pride in creating services that let other developers do great things. That requires a great developer experience, and while we work hard at that, we'd love your help. Whether you're running your first worker or are an expert at building beautiful systems of connected processes, we want your feedback, good, bad, or otherwise. We also love any and all contributions to our docs, client libraries, and our examples. Talk To Us We keep an active presence on Twitter as @getiron , on Google+ as +Iron.io , and via chat at get.iron.io/chat . Follow us, circle us, tweet us, or chat us up. We'd love to hear from you. We also monitor the iron.io , ironworker , ironmq , and ironcache tags on StackOverflow . If you're running into problems, feel free to post there, and one of our engineers will answer the question. If you have some free time and want to help others (and establish yourself as an expert in the community!) you can follow those tags and answer questions, too. Help Us Help Others We've set up a swag reward program and open-sourced our Dev Center to encourage community members to help out. File an issue or send a pull request on the Dev Center and help us make it even better, and get some swag or service credits for your trouble. Help us answer questions on StackOverflow, and get swag or service credits as a sign of our gratitude. Who doesn't like free stuff? "
}, {
"title": "Iron.io Frequently Asked Questions",
"url": "/faq/index.html",
"section": null,
"date": null,
"categories": [],
"summary": null,
"short": "This a compilation of helpful answers to frequently asked questions. If you don't see your question on here fear not! as we have a highly active 24/7 public support channel at get.iron.io/chat The Iron Team also encourages you to contribute to our documentation. Iron Heads Up Display How do I share my Project/Account with others? IronMQ What is IronMQ? What",
"body": "This a compilation of helpful answers to frequently asked questions. If you don't see your question on here fear not! as we have a highly active 24/7 public support channel at get.iron.io/chat The Iron Team also encourages you to contribute to our documentation. Iron Heads Up Display How do I share my Project/Account with others? IronMQ What is IronMQ? What can I do with IronMQ? How do I get started with IronMQ? What resources are available on IronMQ? What are the benefits of IronMQ vs self-managed message queues? What are the benefits of IronMQ vs RabbitMQ? What are the benefits of IronMQ vs SQS? What happens to messages if no clients pull them off after put? Why are GETs and DELETEs separate actions? What is a push queue? What is the difference between unicast and multicast? Is the retry limit on unicast based on individual push attempt? Can a queue be a push queue and a pull queue at the same time? What security measures are used within IronMQ? IronWorker What is IronWorker? What client libraries are available for IronWorker? What are the advantages of IronWorker vs Heroku Workers, Celery, or Resque? I'm getting this error \"zip/zip (LoadError)\" IronCache What is IronCache? How do I share my Project/Account? Sharing your Project by simply clicking on the share icon in our heads up display. If the user has an account they will immediately see your project otherwise a invite will be sent our asking them to join yours. What is IronMQ? IronMQ is an elastic message queue created specifically with the cloud in mind. It’s easy to use, runs on industrial-strength cloud infrastructure, and offers developers ready-to-use messaging with highly reliable delivery options and cloud-optimized performance. What can I do with IronMQ? A messaging layer is key to creating reliable and scalable distributed systems. It lets you orchestrate and manage the volume of messages and events that flow within your application and between other applications, sites, and services. IronMQ is a cloud-based solution that eliminates any setup or maintenance and provides work dispatch, load buffering, synchronicity, database offloading, and many other core needs for scalable cloud applications. How do I get started with IronMQ? Users can get up and running in a few minutes. Just sign up and get an auth token, and then you can send and receive messages on one or more queues. It’s that simple. IronMQ has a generous free plan with no credit card required and so it’s easy to build in message queuing right from the start. What resources are available on IronMQ? See our developer section for information on how IronMQ works, API reference guide and other technical details. There are also a growing list of client libraries and framework integrations. These cover most common languages as well as frameworks such as Celery, YII, Laravel, DelayedJob and others. What are the benefits of IronMQ vs self-managed message queues? Cloud services provide many advantages over standing up software on self-managed servers. Primary ones include reduced complexity, greater speed to market, and increased reliability and scalability. It’s reasonably easy to stand up an open-source message queuing solution on a single virtual server but it’s exceedingly difficult to make a queuing system highly scalable and reliable. It takes multiple instances across zones or regions and redundancy within every layer and component including load balancing and the persistence layer. Robust logging and introspection tools add additional complexity. Multiply this across multiple environments, applications, systems, and business units, and the task of operating self-managed queues becomes almost impossible. What are the benefits of IronMQ vs RabbitMQ? RabbitMQ is an open-source packaged based on the AMQP protocol. It’s a strong messaging standard that has a lot of backing and inertia behind it. Unfortunately, it’s built for a different time – one that is pre-cloud and behind the firewall. It requires a lot of work to scale and make redundant and is more complex than most developers need. IronMQ is based on HTTP, takes JSON packages, and uses OAuth for authentication – all protocols and standards that are well-known to cloud developers. AMQP is a separate application layer protocol that is different than the one developers are used to using on a daily basis. AMQP also uses a less common default socket as part of the transport layer. Whereas certain cloud application hosts don’t allow most socket connections from within their sandbox, they do allow HTTP requests. HTTP and HTTPS are always open on most enterprise firewalls, but special ports may not always be. Everyone can easily speak HTTP, but it takes special effort to speak AMQP. This greatly limits the environments into which AMQP can be deployed. For more information on the differences between IronMQ and RabbitMQ, please see the comparison matrix on the website. What are the benefits of IronMQ vs SQS? For more information on the differences between IronMQ and SQS, please see the comparison matrix on the website. What happens to messages if no clients pull them off after putting them in the queue? Messages will persist in the queue until a receiver takes them off. Why are GETs and DELETEs separate actions? Receiving a message (GET) and deleting a message (DELETE) are separate actions because it provides a reliable paradigm for processing messages. An explicit delete protects messages from being only partially processed. If the receiving process dies or encounters an error, the message will be automatically put back on the queue once the timeout is reached. Note that with push queues, messages are deleted from the queue once a successful push takes place. (Messages can be retried multiple times.) What is a push queue? A push queue is a queue that automatically pushes messages to endpoints. These endpoints can be HTTP/REST endpoints, IronMQ endpoints (in the form of a webhook), or an IronWorker endpoint (also in the form of a webhook). After a succesful push, messages are automatically deleted from the message queue. What is the difference between unicast and multicast? Unicast is a routing pattern that will cycle through the endpoints pushing to one endpoint after another until a success push occurs. Multicast is a routing pattern that will push the messages to all the subscribers. Is the retry limit on unicast based on individual push attempts or based on the number of cycles of pushes? It is transient so events and the associated data only exists until it has been delivered to all connected clients. If there are no clients subscribed to the channel that the message has been triggered on then that event is instantly lost. At present, we do not persist messages beyond the timeout value (default or user-set). Can a queue be a push queue and a pull queue at the same time? No. Queues are either on or the other (messages don’t last long on a push queue). You can switch from a pull queue to a push queue and vice versa at any point. (For example, to turn a push queue into a pull queue,you would just send push_type : pull.) Messages on a push queue at the time of a change will remain on the queue. You can also make a pull queue a subscriber of a push queue – either statically or by adding/deleting subscribers dynamically. (Just add the webhook endpoint for the pull queue as a subscriber for the push queue.) What security measures are used within IronMQ? Iron.io services run on top of industrial-strength clouds such as AWS and Rackspace and so we inherit many of their security measures and certifications that these clouds offer regarding VM security, network security, and physical security. Strong authentication using OAuth is provided to ensure that Iron.io accounts, services, and projects are secured against unauthorized access. Only account owners and accounts that queues have been shared with can access the queues, workers, and caches they create. We use SSL to protect data in transit and provide a high levels of security for the data within the system. We have a number of customers using us for transactional data and believe we offer a secure, reliable solution for cloud messaging. We do, however, recommend that for especially sensitive data, clients do client-level encryption of data payloads so that it has added protection even when data is at rest. We’re happy to discuss these measures as well as custom plans that can address areas that include SLAs, architectural help and enhanced support, and custom data retention options. What happens once the set number of api requests, compute hours, or data volume in a plan isreached? It’s up to you. Service can continue seamlessly on a usage-based rates or you can set a hard stop at the plan amount. The default is unlimited usage. See each service for the usage-based rates. What happens if you turn the default off and hit the plan amount? If you turn the default off and hit the plan limit, then subsequent API requests will return errors. (Specifically, the services will return an HTTP status code of 403.) Can you adjust the plan limits? You can either increase or decrease your plans at will. You can also turn on unlimited usage and you'll be billed at usage-based rates for amounts over the plan. Will you be notified as you reach or exceed the plan amounts? Yes. You’ll be notified on a regular basis of your usage as well as if you get close and/or reach the plan amounts. Note that you can switch plans at any time. Can I pay at usage-based rates? If you’re a heavy user and have specific needs, please let us know and we’d be happy to work with you to customize a plan that fits your needs. Contact us for more details. What is IronWorker? An easy-to-use scalable task queue that gives cloud developers a simple way to offload front-end tasks, run scheduled jobs, and process tasks in the background and at scale. What are the advantages of IronWorker vs Heroku Workers, Celery, or Resque? Our workers give you a wide range of flexibility and scalability that other services can't match. We support a wide range of languages. Python, Ruby, PHP, .NET, Java, Clojure, Node.js, Go, and binary code! Have a up to the second reporting and analytics though our Heads Up Display (HUD). By not having to manage your own servers, queues, and schedulers, we allow you scale accordingly to your needs, only processing time is counted. See our Comparison Chart What client libraries are available for IronWorker? We current have 6 official client libraries in Ruby, PHP, Python, Java, Node.js, and Go, as well as a unofficial client library for .Net. Ruby: http://github.com/iron-io/iron_worker_ruby_ng PHP: http://github.com/iron-io/iron_worker_php Python: http://github.com/iron-io/iron_worker_python Java: http://github.com/iron-io/iron_worker_java Node: http://github.com/iron-io/iron_worker_node Go: http://github.com/iron-io/iron_go .NET(unofficial): http://github.com/odeits/IronTools\" Help I'm getting an error \"zip/zip (LoadError)\" error:/usr/local/lib/site_ruby/1.9.1/rubygems/custom_require.rb:36:in `require': cannot load such file -- zip/zip (LoadError) You are likely using a old version of the iron_worker_ng gem. please update by running Command Line $ gem update iron_worker_ng your gem version should be 1.0.2 later. Check this running the following command. $ iron_worker -v link: stackoverflow resource to this error. What is IronCache? IronCache is an elastic and durable key/value store that’s perfect for applications that need to share state, pass data, and coordinate activity between processes and devices. Reduce database load by making use of a high-performance middle tier for asynchronous processing and communication. We keep an active presence on Twitter as @getiron , on Google+ as +Iron.io , and via chat at get.iron.io/chat . Follow us, circle us, tweet us, or chat us up. We'd love to hear from you. "
}, {
"title": null,
"url": "/index.html",
"section": null,
"date": null,
"categories": [],
"summary": null,
"short": "IronMQ is a high performance hosted message queue that lets you pass messages and events between processes and other systems. Process more things independently and asynchronously without ever touching a server. Select IronWorker is a massively parallel multi-language worker platform. From crawling the web to delivering notifications at scale, you can run thousands of tasks in parallel and schedule jobs",
"body": " IronMQ is a high performance hosted message queue that lets you pass messages and events between processes and other systems. Process more things independently and asynchronously without ever touching a server. Select IronWorker is a massively parallel multi-language worker platform. From crawling the web to delivering notifications at scale, you can run thousands of tasks in parallel and schedule jobs easily from within your applications. Select A key/value store in the cloud, IronCache allows you to define caches that you can store and retrieve values from. Built on industry standards, it makes building out scalable, robust storage simple. Select "
}, {
"title": "IronMQ Documentation",
"url": "/mq/index.html",
"section": null,
"date": null,
"categories": [],
"summary": null,
"short": "1 Configure Your Client 2 Post a Message to the Queue 3 Get a Message off the Queue 4 Delete a Message from the Queue The Post/Get/Delete Paradigm IronMQ was designed to be fault-tolerant while still maintaining an only-delivered-once promise. It accomplishes this through a special Post/Get/Delete paradigm for messages. Essentially, messages are posted to a queue. Clients then get",
"body": " 1 Configure Your Client 2 Post a Message to the Queue 3 Get a Message off the Queue 4 Delete a Message from the Queue The Post/Get/Delete Paradigm IronMQ was designed to be fault-tolerant while still maintaining an only-delivered-once promise. It accomplishes this through a special Post/Get/Delete paradigm for messages. Essentially, messages are posted to a queue. Clients then get the messages off the queue; each get \"reserves\" the message for a configurable amount of time— the default is one minute —after which the message is returned to the queue. While the client has the message reserved, it should complete its operation, then delete the message from the queue. This paradigm ensures that failures while processing a message simply return the message to the queue to be reprocessed and that only one client will ever be processing a message at any given point. Assuming the client deletes the message (as it should), the message will only ever be processed once. 1. Get a project ID and auth token You can retrieve your project ID and token from the HUD by clicking on a project then clicking the little key icon. 2. Post a Message to a Queue All the Iron.io APIs are REST-based with JSON bodies and use OAuth2 for authentication. Here's an example HTTP request for posting to a queue: POST https://mq-aws-us-east-1.iron.io:443/1/projects/{PROJECT_ID}/queues/test_queue/messages Request Headers Authorization: OAuth {TOKEN} Content-Type: application/json Body { \"messages\" : [{ \"body\" : \"hello world!\" }]} Response { \"ids\" : [ \"5824513078343549739\" ], \"msg\" : \"Messages put on queue.\" } Curl Example Replace {TOKEN} and {PROJECT_ID} with your credentials obtained from HUD . curl -i -H \"Content-Type: application/json\" -H \"Authorization: OAuth {TOKEN}\" -X POST -d '{\"messages\":[{\"body\":\"hello world!\"}]}' \"https://mq-aws-us-east-1.iron.io/1/projects/{PROJECT_ID}/queues/test_queue/messages\" Ruby Example Make sure you've set up your configuration file . @ironmq = IronMQ :: Client . new () @queue = @ironmq . queue ( \"test_queue\" ) @queue . post ( \"hello world!\" ) PHP Example Make sure you've set up your configuration file . <?php $ironmq = new IronMQ (); $ironmq -> postMessage ( \"test_queue\" , \"Hello world!\" ); Python Example Make sure you've set up your configuration file . ironmq = IronMQ () queue = ironmq . queue ( \"test_queue\" ) queue . post ( \"hello world!\" ) Node.js Example Make sure you've set up your configuration file . var iron_mq = require ( 'iron_mq' ); var imq = new iron_mq . Client (); var queue = imq . queue ( \"test_queue\" ); queue . post ( \"Hello, IronMQ!\" , function ( error , body ) { console . log ( error , body ); }); Go Example Make sure you've set up your configuration file . package main import ( \"fmt\" \"github.com/iron-io/iron_go/mq\" ) func main () { queue := mq . New ( \"hello_queue\" ) id , err := queue . PushString ( \"Hello, world!\" ) fmt . Println ( id , err ) } Java Example Replace {TOKEN} and {PROJECT_ID} with your credentials obtained from HUD . Client client = new Client ( \"{PROJECT_ID}\" , \"{TOKEN}\" , Cloud . IronAWSUSEast ); Queue queue = client . queue ( \"test_queue\" ); queue . push ( \"Hello world!\" ); Clojure Example Replace {TOKEN} and {PROJECT_ID} with your credentials obtained from HUD . ( def client ( mq/create-client \"{TOKEN}\" \"{PROJECT_ID}\" )) ( mq/post-message client \"test_queue\" \"Hello world!\" ) .NET Example Replace {TOKEN} and {PROJECT_ID} with your credentials obtained from HUD . Client client = new Client ( \"{PROJECT_ID}\" , \"{TOKEN}\" ); Queue queue = client . queue ( \"test_queue\" ); queue . push ( \"Hello world!\" ); 3. Get a Message off the Queue Getting a message off the queue is simple: GET https://mq-aws-us-east-1.iron.io:443/1/projects/ { PROJECT_ID } /queues/test_queue/messages Request Headers Authorization: OAuth {TOKEN} Content-Type: application/json Response { \"messages\" : [{ \"id\" : \"5824513078343549739\" , \"body\" : \"hello\" , \"timeout\" : 60 }]} Curl Example Replace {TOKEN} and {PROJECT_ID} with your credentials obtained from HUD . curl -i -H \"Content-Type: application/json\" -H \"Authorization: OAuth {TOKEN}\" \"https://mq-aws-us-east-1.iron.io/1/projects/{PROJECT_ID}/queues/test_queue/messages\" Ruby Example Make sure you've set up your configuration file . @ironmq = IronMQ :: Client . new () @queue = @ironmq . queue ( \"test_queue\" ) msg = @queue . get () PHP Example Make sure you've set up your configuration file . <?php $ironmq = new IronMQ (); $ironmq -> getMessage ( \"test_queue\" ); Python Example Make sure you've set up your configuration file . ironmq = IronMQ () queue = ironmq . queue ( \"test_queue\" ) msg = queue . get () Node.js Example Make sure you've set up your configuration file . var iron_mq = require ( 'iron_mq' ); var imq = new iron_mq . Client (); var queue = imq . queue ( \"test_queue\" ); queue . get ({ n : 1 }, function ( error , body ) { console . log ( error , body ); }); Go Example Make sure you've set up your configuration file . package main import ( \"fmt\" \"github.com/iron-io/iron_go/mq\" ) func main () { queue := mq . New ( \"hello_queue\" ) msg , err := queue . Get () fmt . Println ( msg , err ) } Java Example Replace {TOKEN} and {PROJECT_ID} with your credentials obtained from HUD . Client client = new Client ( \"{PROJECT_ID}\" , \"{TOKEN}\" , Cloud . IronAWSUSEast ); Queue queue = client . queue ( \"test_queue\" ); Message msg = queue . get (); Clojure Example Replace {TOKEN} and {PROJECT_ID} with your credentials obtained from HUD . ( def client ( mq/create-client \"{TOKEN}\" \"{PROJECT_ID}\" )) ( let [ msg ( mq/get-message client \"test_queue\" )]) .NET Example Replace {TOKEN} and {PROJECT_ID} with your credentials obtained from HUD . Client client = new Client ( \"{PROJECT_ID}\" , \"{TOKEN}\" ); Queue queue = client . queue ( \"test_queue\" ); Message msg = queue . get (); 4. Delete a Message from the Queue Once you've gotten a message off the queue and have processed it, you need to delete the message from the queue. This ensures that the message is only processed once, but that it will not be lost if the processor fails during processing. Deleting is as simple as posting and getting: DELETE https://mq-aws-us-east-1.iron.io:443/1/projects/ { PROJECT_ID } /queues/test_queue/messages/ { MESSAGE_ID } Request Headers Authorization: OAuth {TOKEN} Content-Type: application/json Response { \"msg\" : \"Deleted\" } Curl Example Replace {TOKEN} and {PROJECT_ID} with your credentials obtained from HUD . curl -i -H \"Content-Type: application/json\" -H \"Authorization: OAuth {TOKEN}\" -X DELETE \"https://mq-aws-us-east-1.iron.io/1/projects/{PROJECT_ID}/queues/test_queue/messages/{MESSAGE_ID}\" Ruby Example Make sure you've set up your configuration file . @ironmq = IronMQ :: Client . new () @queue = @ironmq . queue ( \"test_queue\" ) msg = @queue . get () msg . delete PHP Example Make sure you've set up your configuration file . $ironmq = new IronMQ(); $msg = $ironmq->getMessage(\"test_queue\"); $ironmq->deleteMessage($msg->id); Python Example Make sure you've set up your configuration file . ironmq = IronMQ () queue = ironmq . queue ( \"test_queue\" ) response = queue . get () queue . delete ( response [ \"messages\" ][ 0 ][ \"id\" ]) Node.js Example Make sure you've set up your configuration file . var iron_mq = require ( 'iron_mq' ); var imq = new iron_mq . Client (); var queue = imq . queue ( \"test_queue\" ); queue . post ( \"Hello, IronMQ!\" , function ( error , body ) { console . log ( error , body ); }); var message_id ; queue . get ({ n : 1 }, function ( error , body ) { console . log ( error , body ); if ( error == null ) { message_id = body . id ; } }); queue . del ( message_id , function ( error , body ) { console . log ( error , body ); }); Go Example Make sure you've set up your configuration file . package main import ( \"fmt\" \"github.com/iron-io/iron_go/mq\" ) func main () { queue := mq . New ( \"hello_queue\" ) ids , err := queue . PushStrings ( \"Hello\" , \"world\" , \"!\" ) fmt . Println ( ids , err ) msg , err := queue . Get () fmt . Println ( msg , err ) if err == nil { m . Delete () } msgs , err := queue . Get ( 2 ) fmt . Println ( msgs , err ) if err != nil { for _ , m := range append ( msgs , msg ) { m . Delete () } } } Java Example Replace {TOKEN} and {PROJECT_ID} with your credentials obtained from HUD . Client client = new Client ( \"{PROJECT_ID}\" , \"{TOKEN}\" , Cloud . IronAWSUSEast ); Queue queue = client . queue ( \"test_queue\" ); Message msg = queue . get (); queue . deleteMessage ( msg ); Clojure Example Replace {TOKEN} and {PROJECT_ID} with your credentials obtained from HUD . ( def client ( mq/create-client \"{TOKEN}\" \"{PROJECT_ID}\" )) ( let [ msg ( mq/get-message client \"test_queue\" )] ( mq.delete-message client \"test_queue\" msg )) .NET Example Replace {TOKEN} and {PROJECT_ID} with your credentials obtained from HUD . Client client = new Client ( \"{PROJECT_ID}\" , \"{TOKEN}\" ); Queue queue = client . queue ( \"test_queue\" ); Message msg = queue . get (); queue . deleteMessage ( msg ); Next Steps You should be well-grounded in the post/get/delete paradigm now—now you need to build something cool! To get up and running quickly, you may want to look into our Beanstalk support. Check out our reference material to explore the boundaries of IronMQ's system. If you need ideas for what you can accomplish with IronMQ, you may want to take a look at our solutions . "
}, {
"title": "Bernard on IronMQ",
"url": "/mq/integrations/bernard/index.html",
"section": null,
"date": null,
"categories": [],
"summary": null,
"short": "You can use IronMQ as a highly available message broker for Bernard. It's easy: Add bernard/bernard to your composer.json file. Configure Bernard to use the correct driver which is explained here . That's pretty much it, now use Bernard as normal! More info at: bernardphp.com",
"body": "You can use IronMQ as a highly available message broker for Bernard. It's easy: Add bernard/bernard to your composer.json file. Configure Bernard to use the correct driver which is explained here . That's pretty much it, now use Bernard as normal! More info at: bernardphp.com "
}, {
"title": "Celery on IronMQ",
"url": "/mq/integrations/celery/index.html",
"section": null,
"date": null,
"categories": [],
"summary": null,
"short": "Celery is a task queue for Python. Originally developed as part of the Django framework, it was split off into its own project and has quickly become the standard for task processing in Python. Celery supports multiple technologies for its queue broker including RabbitMQ, Redis, and IronMQ. There are many advantages to choosing IronMQ over the others. To name a",
"body": "Celery is a task queue for Python. Originally developed as part of the Django framework, it was split off into its own project and has quickly become the standard for task processing in Python. Celery supports multiple technologies for its queue broker including RabbitMQ, Redis, and IronMQ. There are many advantages to choosing IronMQ over the others. To name a few: Instant high availability No servers, maintenance, or scaling to worry about Greater job visibility with IronMQ dashboards For more information, visit Iron.io/celery Getting Started Celery was designed to easily change your broker which makes changing to IronMQ as easy as 1-2-3. Install iron_celery: pip install iron_celery add import iron_celery set BROKER_URL = 'ironmq://project_id:token@' We expand on these steps in the integration libary docs . Further Reading iron_celery Module: https://github.com/iron-io/iron_celery Getting Started Blog Post: https://blog.iron.io/using-ironmq-as-celery-broker Getting Started Video (2 mins): Iron.io/celery Celery on Heroku: https://github.com/iron-io/heroku-iron-celery-demo "
}, {
"title": "Delayed Job on IronMQ",
"url": "/mq/integrations/delayed_job/index.html",
"section": null,
"date": null,
"categories": [],
"summary": null,
"short": "You can use IronMQ as a highly available message broker for Delayed Job. It's easy: Add gem to your Gemfile: gem 'delayed_job_ironmq' Add an iron.json file to the root of your Rails project, see: Configuration That's pretty much it, now use Delayed Job as normal! More info at: https://github.com/iron-io/delayed job ironmq",
"body": "You can use IronMQ as a highly available message broker for Delayed Job. It's easy: Add gem to your Gemfile: gem 'delayed_job_ironmq' Add an iron.json file to the root of your Rails project, see: Configuration That's pretty much it, now use Delayed Job as normal! More info at: https://github.com/iron-io/delayed job ironmq "
}, {
"title": "Worker Integrations",
"url": "/mq/integrations/index.html",
"section": null,
"date": null,
"categories": [],
"summary": null,
"short": "Here are the worker system integrations for IronMQ: Celery for Python Delayed Job for Ruby Bernard for PHP Contributing To add an integration to this list, just fork our Dev Center repository , add your integration to this page, then submit a pull request.",
"body": "Here are the worker system integrations for IronMQ: Celery for Python Delayed Job for Ruby Bernard for PHP Contributing To add an integration to this list, just fork our Dev Center repository , add your integration to this page, then submit a pull request. "
}, {
"title": "Other Integrations",
"url": "/mq/integrations/other/index.html",
"section": null,
"date": null,
"categories": [],
"summary": null,
"short": "IronMQ is The Message Queue for the Cloud, and have helpful libraries to help with integration into many frameworks by our users. Delayed Job for Rails Zend Framework Celery for Python Yii Framework Laravel Framework Drupal .NET Framework Magento: coming soon Apache Camel: Apache Camel Feel free to add to this list by checking out our docs and submit a",
"body": "IronMQ is The Message Queue for the Cloud, and have helpful libraries to help with integration into many frameworks by our users. Delayed Job for Rails Zend Framework Celery for Python Yii Framework Laravel Framework Drupal .NET Framework Magento: coming soon Apache Camel: Apache Camel Feel free to add to this list by checking out our docs and submit a pull request to the docs. "
}, {
"title": "IronMQ Client Libraries",
"url": "/mq/libraries/index.html",
"section": null,
"date": null,
"categories": [],
"summary": null,
"short": "Official Client Libraries These are our official client libraries that use the IronMQ REST/HTTP API . Ruby Go Java Python PHP Node.JS Clojure .Net Community Supported Client Libraries These are some unofficial client libraries that use the IronMQ REST/HTTP API . Node.JS - node-ironio - IronMQ by Andrew Hallock .NET - Blacksmith by Khalid Abuhakmeh .NET - IronTools by Oscar",
"body": "Official Client Libraries These are our official client libraries that use the IronMQ REST/HTTP API . Ruby Go Java Python PHP Node.JS Clojure .Net Community Supported Client Libraries These are some unofficial client libraries that use the IronMQ REST/HTTP API . Node.JS - node-ironio - IronMQ by Andrew Hallock .NET - Blacksmith by Khalid Abuhakmeh .NET - IronTools by Oscar Deits .NET - Rest4Net.IronMq by Acropolium Studio .NET - IronSharp by Jeremy Bell PHP - BBQ - Queue Abstraction Layer by Ville Matilla PHP - Codeigniter-Iron.io by jrutheiser IronMQ Spring Integration by Trevor Shick Perl - IO::Iron by Mikko Koivunalho We will continue to add more clients for the REST/HTTP API. If you would like to see one in particular, please let us know. We're also totally supportive if you want to build or modify client libraries yourself. Feel free to jump into our live chat support for help. We love community involvement! "
}, {
"section": null,
"date": null,
"categories": [],
"summary": null,
"short": "IronMQ provides a REST/HTTP API to allow you to interact programmatically with your queues on IronMQ. Table of Contents Endpoints Authentication Requests Base URL Pagination Responses Status Codes Exponential Backoff Endpoints URL HTTP Verb Purpose /projects/ {Project ID} /queues GET List Message Queues /projects/ {Project ID} /queues/ {Queue Name} GET Get Info About a Message Queue /projects/ {Project ID} /queues/",
"body": "IronMQ provides a REST/HTTP API to allow you to interact programmatically with your queues on IronMQ. Table of Contents Endpoints Authentication Requests Base URL Pagination Responses Status Codes Exponential Backoff Endpoints URL HTTP Verb Purpose /projects/ {Project ID} /queues GET List Message Queues /projects/ {Project ID} /queues/ {Queue Name} GET Get Info About a Message Queue /projects/ {Project ID} /queues/ {Queue Name} POST Update a Message Queue /projects/ {Project ID} /queues/ {Queue Name} DELETE Delete a Message Queue /projects/ {Project ID} /queues/ {Queue Name} /clear POST Clear All Messages from a Queue /projects/ {Project ID} /queues/ {Queue Name} /messages POST Add Messages to a Queue /projects/ {Project ID} /queues/ {Queue Name} /messages/webhook POST Add Messages to a Queue via Webhook /projects/ {Project ID} /queues/ {Queue Name} /messages GET Get Messages from a Queue /projects/ {Project ID} /queues/ {Queue Name} /messages/ {Message ID} GET Get Message by ID /projects/ {Project ID} /queues/ {Queue Name} /messages/ {Message ID} DELETE Delete a Message from a Queue /projects/ {Project ID} /queues/ {Queue Name} /messages DELETE Delete Multiple Messages from a Queue /projects/ {Project ID} /queues/ {Queue Name} /messages/ {Message ID} /touch POST Touch a Message on a Queue /projects/ {Project ID} /queues/ {Queue Name} /messages/ {Message ID} /release POST Release a Message on a Queue Related to Pull Queues URL HTTP Verb Purpose /projects/ {Project ID} /queues/ {Queue Name} /alerts POST Add Alerts to a Queue /projects/ {Project ID} /queues/ {Queue Name} /alerts PUT Replace Alerts on a Queue /projects/ {Project ID} /queues/ {Queue Name} /alerts DELETE Remove Alerts from a Queue /projects/ {Project ID} /queues/ {Queue Name} /alerts/ {Alert ID} DELETE Remove Alert from a Queue by ID Related to Push Queues URL HTTP Verb Purpose /projects/ {Project ID} /queues/ {Queue Name} /subscribers POST Add Subscribers to a Queue /projects/ {Project ID} /queues/ {Queue Name} /subscribers DELETE Remove Subscribers from a Queue /projects/ {Project ID} /queues/ {Queue Name} /messages/ {Message ID} /subscribers GET Get Push Status for a Message /projects/ {Project ID} /queues/ {Queue Name} /messages/ {Message ID} /subscribers/ {Subscriber ID} DELETE Delete Push Message for a Subscriber Authentication IronMQ uses OAuth2 tokens to authenticate API requests. All methods require authentication unless specified otherwise. You can find and create your API tokens in the HUD . To authenticate your request, you should include a token in the Authorization header for your request or in your query parameters. Tokens are universal, and can be used across services. Note that each request also requires a Project ID to specify which project the action will be performed on. You can find your Project IDs in the HUD . Project IDs are also universal, so they can be used across services as well. Example Authorization Header : Authorization: OAuth abc4c7c627376858 Example Query with Parameters : GET https:// mq-aws-us-east-1 .iron.io/1/projects/ {Project ID} /queues?oauth=abc4c7c627376858 Notes: Be sure you have the correct case, it's OAuth , not Oauth. In URL parameter form, this will be represented as: ?oauth=abc4c7c627376858 Requests Requests to the API are simple HTTP requests against the API endpoints. All request bodies should be in JSON format, with Content-Type of application/json . Base URL All endpoints should be prefixed with the following: https:// {Host} .iron.io/ {API Version} / API Version Support : IronMQ API supports version 1 The hosts for the clouds IronMQ supports are as follows: Cloud Host AWS US-EAST mq-aws-us-east-1.iron.io AWS EU-WEST mq-aws-eu-west-1.iron.io Rackspace ORD mq-rackspace-ord.iron.io Rackspace LON mq-rackspace-lon.iron.io Rackspace DFW Pro Plans Only - Email Support Alternative domains can be found here in case of dns failures . Pagination For endpoints that return lists/arrays of values: page - The page of results to return. Default is 0. Maximum is 100. per_page - The number of results to return. It may be less if there aren't enough results. Default is 30. Maximum is 100. Responses All responses are in JSON, with Content-Type of application/json . A response is structured as follows: { \"msg\" : \"some success or error message\" } Status Codes The success failure for request is indicated by an HTTP status code. A 2xx status code indicates success, whereas a 4xx status code indicates an error. Code Status 200 OK: Successful GET 201 Created: Successful POST 400 Bad Request: Invalid JSON (can't be parsed or has wrong types). 401 Unauthorized: The OAuth token is either not provided or invalid. 403 Project suspected, resource limits. 404 Not Found: The resource, project, or endpoint being requested doesn’t exist. 405 Invalid HTTP method: A GET, POST, DELETE, or PUT was sent to an endpoint that doesn’t support that particular verb. 406 Not Acceptable: Required fields are missing. 503 Service Unavailable. Clients should implement exponential backoff to retry the request. Specific endpoints may provide other errors in other situations. When there's an error, the response body contains a JSON object something like: { \"msg\" : \"reason for error\" } Exponential Backoff When a 503 error code is returned, it signifies that the server is currently unavailable. This means there was a problem processing the request on the server-side; it makes no comment on the validity of the request. Libraries and clients should use exponential backoff when confronted with a 503 error, retrying their request with increasing delays until it succeeds or a maximum number of retries (configured by the client) has been reached. List Message Queues Get a list of all queues in a project. By default, 30 queues are listed at a time. To see more, use the page parameter or the per_page parameter. Up to 100 queues may be listed on a single page. Endpoint GET /projects/ {Project ID} /queues URL Parameters Project ID : Project these queues belong to Optional URL Parameters page : The 0-based page to view. The default is 0. per_page : The number of queues to return per page. The default is 30, the maximum is 100. Response [ { \"id\" : \"1234567890abcdef12345678\" , \"project_id\" : \"1234567890abcdef12345678\" , \"name\" : \"queue name\" } ] Get Info About a Message Queue This call gets general information about the queue. Endpoint GET /projects/ {Project ID} /queues/ {Queue Name} URL Parameters Project ID : Project the queue belongs to Queue Name : Name of the queue Response { \"size\" : \"queue size\" } Delete a Message Queue This call deletes a message queue and all its messages. Endpoint DELETE /projects/ {Project ID} /queues/ {Queue Name} URL Parameters Project ID : Project the queue belongs to Queue Name : Name of the queue Response { \"msg\" : \"Deleted.\" } Update a Message Queue This allows you to change the properties of a queue including setting subscribers and the push type if you want it to be a push queue. Endpoint POST /projects/ {Project ID} /queues/ {Queue Name} URL Parameters Project ID : Project the queue belongs to Queue Name : Name of the queue. If the queue does not exist, it will be created for you. WARNING: Do not use the following RFC 3986 Reserved Characters within your in the naming of your queues. ! * ' ( ) ; : @ & = + $ , / ? # [ ] Body Parameters Optional The following parameters are all related to Push Queues. subscribers : An array of subscriber hashes containing a required \"url\" field and an optional \"headers\" map for custom headers. This set of subscribers will replace the existing subscribers. See Push Queues to learn more about types of subscribers. To add or remove subscribers, see the add subscribers endpoint or the remove subscribers endpoint . The maximum is 64kb for JSON array of subscribers' hashes. See below for example JSON. push_type : Either multicast to push to all subscribers or unicast to push to one and only one subscriber. Default is multicast . To revert push queue to reqular pull queue set pull . retries : How many times to retry on failure. Default is 3. Maximum is 100. retries_delay : Delay between each retry in seconds. Default is 60. error_queue : The name of another queue where information about messages that can't be delivered after retrying retries number of times will be placed. Pass in an empty string to disable error queues. Default is disabled. See Push Queues to learn more. Request { \"push_type\" : \"multicast\" , \"subscribers\" : [ { \"url\" : \"http://mysterious-brook-1807.herokuapp.com/ironmq_push_1\" }, { \"url\" : \"http://mysterious-brook-1807.herokuapp.com/ironmq_push_2\" , \"headers\" : { \"Content-Type\" : \"application/json\" } } ] } Response { \"id\" : \"50eb546d3264140e8638a7e5\" , \"name\" : \"pushq-demo-1\" , \"size\" : 7 , \"total_messages\" : 7 , \"project_id\" : \"4fd2729368a0197d1102056b\" , \"retries\" : 3 , \"push_type\" : \"multicast\" , \"retries_delay\" : 60 , \"subscribers\" : [ { \"url\" : \"http://mysterious-brook-1807.herokuapp.com/ironmq_push_1\" }, { \"url\" : \"http://mysterious-brook-1807.herokuapp.com/ironmq_push_2\" , \"headers\" : { \"Content-Type\" : \"application/json\" }} ] } Add Alerts to a Queue Add alerts to a queue. This is for Pull Queue only. POST /projects/ {Project ID} /queues/ {Queue Name} /alerts/ Optional alerts : An array of alert hashes containing required \"type\", \"queue\", \"trigger\", and optional \"direction\", \"snooze\" fields. Maximum number of alerts is 5. See Queue Alerts to learn more. Request { \"alerts\" : [ { \"type\" : \"fixed\" , \"direction\" : \"asc\" , \"trigger\" : 1000 , \"queue\" : \"my_queue_for_alerts\" } ] } Response { \"msg\" : \"Alerts were added.\" } Replace Alerts on a Queue Replace current queue alerts with a given list of alerts. This is for Pull Queue only. PUT /projects/ {Project ID} /queues/ {Queue Name} /alerts/ URL Parameters Project ID : The project these messages belong to. Queue Name : The name of queue. Body Parameters Optional alerts : An array of alerts hashes containing required \"type\", \"queue\", \"trigger\", and optional \"direction\", \"snooze\" fields. Maximum number of alerts is 5. See Queue Alerts to learn more. Request { \"alerts\" : [ { \"type\" : \"progressive\" , \"direction\" : \"desc\" , \"trigger\" : 1000 , \"queue\" : \"my_queue_for_alerts\" } ] } Note: to clear all alerts on a queue send an empty alerts array like so: { \"alerts\" : [] } Response { \"msg\" : \"Alerts were replaced.\" } Remove Alerts from a Queue Remove alerts from a queue. This is for Pull Queue only. DELETE /projects/ {Project ID} /queues/ {Queue Name} /alerts/ URL Parameters Project ID : The project these messages belong to. Queue Name : The name of queue. Body Parameters Optional alerts : An array of alerts hashes containing \"id\" field. See Queue Alerts to learn more. Request { \"alerts\" : [ { \"id\" : \"5eee546df4a4140e8638a7e5\" } ] } Response { \"msg\" : \"Alerts were deleted.\" } Remove Alert from a Queue by ID Remove alert from a queue by its ID. This is for Pull Queue only. DELETE /projects/ {Project ID} /queues/ {Queue Name} /alerts/ {Alert ID} URL Parameters Project ID : The project these messages belong to. Queue Name : The name of queue. Alert ID : The id of the alert to delete. Response { \"msg\" : \"Alerts were deleted.\" } Add Subscribers to a Queue Add subscribers (HTTP endpoints) to a queue. This is for Push Queues only. Endpoint POST /projects/ {Project ID} /queues/ {Queue Name}/subscribers URL Parameters Project ID : Project the queue belongs to Queue Name : Name of the queue. If the queue does not exist, it will be created for you. Body Parameters Optional The following parameters are all related to Push Queues. subscribers : An array of subscriber hashes containing a required \"url\" field and an optional \"headers\" map for custom headers. See below for example. See Push Queues to learn more about types of subscribers. Request { \"subscribers\" : [ { \"url\" : \"http://mysterious-brook-1807.herokuapp.com/ironmq_push_2\" , \"headers\" : { \"Content-Type\" : \"application/json\" } } ] } Response { \"id\" : \"50eb546d3264140e8638a7e5\" , \"name\" : \"pushq-demo-1\" , \"size\" : 7 , \"total_messages\" : 7 , \"project_id\" : \"4fd2729368a0197d1102056b\" , \"retries\" : 3 , \"push_type\" : \"multicast\" , \"retries_delay\" : 60 , \"subscribers\" : [ { \"url\" : \"http://mysterious-brook-1807.herokuapp.com/ironmq_push_1\" }, { \"url\" : \"http://mysterious-brook-1807.herokuapp.com/ironmq_push_2\" , \"headers\" : { \"Content-Type\" : \"application/json\" }} ] } Remove Subscribers from a Queue Remove subscriber from a queue. This is for Push Queues only. Endpoint DELETE /projects/ {Project ID} /queues/ {Queue Name}/subscribers URL Parameters Project ID : Project the queue belongs to Queue Name : Name of the queue. If the queue does not exist, it will be created for you. Body Parameters Optional The following parameters are all related to Push Queues. subscribers : An array of subscriber hashes containing a \"url\" field. See below for example. Request { \"subscribers\" : [ { \"url\" : \"http://mysterious-brook-1807.herokuapp.com/ironmq_push_2\" } ] } Response { \"id\" : \"50eb546d3264140e8638a7e5\" , \"name\" : \"pushq-demo-1\" , \"size\" : 7 , \"total_messages\" : 7 , \"project_id\" : \"4fd2729368a0197d1102056b\" , \"retries\" : 3 , \"push_type\" : \"multicast\" , \"retries_delay\" : 60 , \"subscribers\" : [ { \"url\" : \"http://mysterious-brook-1807.herokuapp.com/ironmq_push_1\" } ] } Clear All Messages from a Queue This call deletes all messages on a queue, whether they are reserved or not. Endpoint POST /projects/ {Project ID} /queues/ {Queue Name} /clear URL Parameters Project ID : The project these messages belong to. Queue Name : The name of the queue. Response { \"msg\" : \"Cleared\" } Add Messages to a Queue This call adds or pushes messages onto the queue. Endpoint POST /projects/ {Project ID} /queues/ {Queue Name} /messages URL Parameters Project ID : The project these messages belong to. Queue Name : The name of the queue. If the queue does not exist, it will be created for you. Message Object Multiple messages may be added in a single request, provided that the messages should all be added to the same queue. Each message object should contain the following keys: Required body : The message data Optional timeout : After timeout (in seconds), item will be placed back onto queue. You must delete the message from the queue to ensure it does not go back onto the queue. Default is 60 seconds. Minimum is 30 seconds, and maximum is 86,400 seconds (24 hours). delay : The item will not be available on the queue until this many seconds have passed. Default is 0 seconds. Maximum is 604,800 seconds (7 days). expires_in : How long in seconds to keep the item on the queue before it is deleted. Default is 604,800 seconds (7 days). Maximum is 2,592,000 seconds (30 days). Request { \"messages\" : [ { \"body\" : \"This is my message 1.\" }, { \"body\" : \"This is my message 2.\" , \"timeout\" : 30 , \"delay\" : 2 , \"expires_in\" : 86400 } ] } Response { \"ids\" : [ \"message 1 ID\" , \"message 2 ID\" ], \"msg\" : \"Messages put on queue.\" } Add Messages to a Queue via Webhook By adding the queue URL below to a third party service that supports webhooks, every webhook event that the third party posts will be added to your queue. The request body as is will be used as the \"body\" parameter in normal POST to queue above. Endpoint POST /projects/ {Project ID} /queues/ {Queue Name} /messages/webhook URL Parameters Project ID : The project these messages belong to. Queue Name : The name of the queue. If the queue does not exist, it will be created for you. Get Messages from a Queue This call gets/reserves messages from the queue. The messages will not be deleted, but will be reserved until the timeout expires. If the timeout expires before the messages are deleted, the messages will be placed back onto the queue. As a result, be sure to delete the messages after you're done with them. Endpoint GET /projects/ {Project ID} /queues/ {Queue Name} /messages URL Parameters Project ID : The Project these messages belong to. Queue Name : The name of queue. Optional Parameters n : The maximum number of messages to get. Default is 1. Maximum is 100. Note: You may not receive all n messages on every request, the more sparse the queue, the less likely you are to receive all n messages. timeout : After timeout (in seconds), item will be placed back onto queue. You must delete the message from the queue to ensure it does not go back onto the queue. If not set, value from POST is used. Default is 60 seconds, minimum is 30 seconds, and maximum is 86,400 seconds (24 hours). wait : Time in seconds to wait for a message to become available. This enables long polling. Default is 0 (does not wait), maximum is 30. delete : true/false. This will delete the message on get. Be careful though, only use this if you are ok with losing a message if something goes wrong after you get it. Default is false. Sample Request GET /projects/ {Project ID} /queues/ {Queue Name} /messages? n=10 & timeout=120 Response { \"messages\" : [ { \"id\" : 1 , \"body\" : \"first message body\" , \"timeout\" : 600 , \"reserved_count\" : 1 , \"push_status\" : { \"retries_remaining\" : 1 } }, { \"id\" : 2 , \"body\" : \"second message body\" , \"timeout\" : 600 , \"reserved_count\" : 1 , \"push_status\" : { \"retries_remaining\" : 1 } } ] } Get Message by ID Get a message by ID. Endpoint GET /projects/ {Project ID} /queues/ {Queue Name} /messages/ {Message ID} URL Parameters Project ID : The Project these messages belong to. Queue Name : The name of queue. Message ID : The id of the message to release. Sample Request GET /projects/4ccf55250948510894024119/queues/test_queue/messages/5981787539458424851 Response { \"id\" : \"5924625841136130921\" , \"body\" : \"hello 265\" , \"timeout\" : 60 , \"status\" : \"deleted\" , \"reserved_count\" : 1 } Release a Message on a Queue Releasing a reserved message unreserves the message and puts it back on the queue as if the message had timed out. Endpoint POST /projects/ {Project ID} /queues/ {Queue Name} /messages/ {Message ID} /release URL Parameters Project ID : The project these messages belong to. Queue Name : The name of queue. Message ID : The id of the message to release. Body Parameters delay : The item will not be available on the queue until this many seconds have passed. Default is 0 seconds. Maximum is 604,800 seconds (7 days). Request Body { \"delay\" : 60 } A JSON document body is required even if all parameters are omitted. {} Response { \"msg\" : \"Released\" } Touch a Message on a Queue Touching a reserved message extends its timeout to the duration specified when the message was created. Default is 60 seconds. Endpoint POST /projects/ {Project ID} /queues/ {Queue Name} /messages/ {Message ID} /touch URL Parameters Project ID : The project these messages belong to. Queue Name : The name of queue. Message ID : The id of the message to touch. Request Any empty JSON body. {} Response { \"msg\" : \"Touched\" } Delete a Message from a Queue This call will delete the message. Be sure you call this after you're done with a message or it will be placed back on the queue. Endpoint DELETE /projects/ {Project ID} /queues/ {Queue Name} /messages/ {Message ID} URL Parameters Project ID : The project these messages belong to. Queue Name : The name of queue. Message ID : The id of the message to delete. Response { \"msg\" : \"Deleted\" } Delete Multiple Messages from a Queue This call will delete multiple messages in one call. Endpoint DELETE /projects/ {Project ID} /queues/ {Queue Name} /messages URL Parameters Project ID : The project these messages belong to. Queue Name : The name of queue. Body Parameters ids : An array of message ids as string. Request Body { \"ids\" : [ \"MESSAGE_ID_1\" , \"MESSAGE_ID_2\" ] } Response { \"msg\" : \"Deleted\" } Get Push Status for a Message You can retrieve the push status for a particular message which will let you know which subscribers have received the message, which have failed, how many times it's tried to be delivered and the status code returned from the endpoint. GET /projects/ {Project ID} /queues/ {Queue Name} /messages/ {Message ID} /subscribers URL Parameters Project ID : The project these messages belong to. Queue Name : The name of queue. Message ID : The id of the message to retrieve status for. Response { \"subscribers\" : [ { \"retries_delay\" : 60 , \"retries_remaining\" : 2 , \"status_code\" : 200 , \"status\" : \"deleted\" , \"url\" : \"http://mysterious-brook-1807.herokuapp.com/ironmq_push_2\" , \"id\" : \"5831237764476661217\" }, { \"retries_delay\" : 60 , \"retries_remaining\" : 2 , \"status_code\" : 200 , \"status\" : \"deleted\" , \"url\" : \"http://mysterious-brook-1807.herokuapp.com/ironmq_push_1\" , \"id\" : \"5831237764476661218\" } ] } Acknowledge / Delete Push Message for a Subscriber This is only for use with long running processes that have previously returned a 202. Read Push Queues page for more information on Long Running Processes DELETE /projects/ {Project ID} /queues/ {Queue Name} /messages/ {Message ID} /subscribers/ {Subscriber ID} URL Parameters Project ID : The project these messages belong to. Queue Name : The name of queue. Message ID : The id of the message. Subscriber ID : The id of the subscriber to delete. Response { \"msg\" : \"Deleted\" }"
}, {
"title": "Beanstalk Interface",
"url": "/mq/reference/beanstalk/index.html",
"section": null,
"date": null,
"categories": [],
"summary": null,
"short": "You can use any of the Beanstalkd clients with IronMQ. The list of supported languages is extensive and so there is sure to one for your language of choice. Table of Contents Beanstalk Libraries Host Information Authentication Tubes vs Queues Notes Beanstalk Libraries Here's a sample list of languages available (with multiple clients libs to choose from for many languages):",
"body": "You can use any of the Beanstalkd clients with IronMQ. The list of supported languages is extensive and so there is sure to one for your language of choice. Table of Contents Beanstalk Libraries Host Information Authentication Tubes vs Queues Notes Beanstalk Libraries Here's a sample list of languages available (with multiple clients libs to choose from for many languages): C C++ Clojure Django Common Lisp Erlang Go Haskell Io Java Node.js OCaml Perl PHP Python Rails Ruby Scheme (Chicken) .NET/C# Check out list of client libraries on GitHub Host Information To connect to IronMQ using Beanstalkd, use one of the hosts on our Clouds page . NOTE : Beanstalkd is currently not supported on Rackspace. Please use one of our HTTP clients if you are on Rackspace. Authentication Because IronMQ requires authentication, the first command you send must put a message onto the queue with the contents: oauth {TOKEN} {PROJECT_ID} The DRAINING response will be returned if authentication fails or if any other command is sent before authentication. Tubes vs Queues Note that a Beanstalkd tube is synonymous with an IronMQ {Queue Name} within the REST/HTTP API. If a tube/queue name is not specified, then the queue name default will be used within IronMQ. Notes The ID you receive when using the Beanstalkd interface will not be the same as the HTTP interface so you cannot use them interchangeably. At the moment, there are some commands that IronMQ does not implement. These include: bury peek, peek-delayed, peek-buried kick list-tubes stats, stats-job, stats-tube pause-tube quit "
}, {
"title": "Choosing the Cloud Your Queues Run On",
"url": "/mq/reference/clouds/index.html",
"section": null,
"date": null,
"categories": [],
"summary": null,
"short": "IronMQ is available on multiple cloud hosts, so your queue can run in the same infrastructure your app does. This saves time on latencies and allows you to spread your queues across multiple clouds, if desired, to maximize your queues' availability. Each of the official IronMQ client libraries allows you to change a configuration setting to set the host the",
"body": "IronMQ is available on multiple cloud hosts, so your queue can run in the same infrastructure your app does. This saves time on latencies and allows you to spread your queues across multiple clouds, if desired, to maximize your queues' availability. Each of the official IronMQ client libraries allows you to change a configuration setting to set the host the library connects to. Changing your cloud is as simple as selecting the host you want. Cloud Host AWS US-EAST mq-aws-us-east-1.iron.io AWS EU-WEST mq-aws-eu-west-1.iron.io Rackspace ORD mq-rackspace-ord.iron.io Rackspace LON mq-rackspace-lon.iron.io Rackspace DFW Pro Plans Only - Email Support Alternative domains can be found here in case of dns failures . NOTE : Beanstalkd is currently not supported on Rackspace. Please use one of our HTTP clients if you are on Rackspace. Check your library's documentation for information on switching the host within the library. Do we not support your cloud of choice? Let us know , and we'll try to add support for it. Setting Host It is useful to quickly change your host in cases where your region has gone down. If want to set the Host, Post, and Protocol specifically, simply include those keys in that project's iron.json file: iron.json { \"project_id\" : \"PROJECT ID HERE\" , \"token\" : \"YOUR TOKEN HERE\" \"host\" : \"mq-aws-us-east-1.iron.io\" } iron.json { \"project_id\" : \"PROJECT ID HERE\" , \"token\" : \"YOUR TOKEN HERE\" \"host\" : \"mq-rackspace-lon.iron.io\" } iron.json { \"project_id\" : \"PROJECT ID HERE\" , \"token\" : \"YOUR TOKEN HERE\" \"host\" : \"mq-aws-eu-west-1.iron.io\" } "
}, {
"title": "Configuring the Official Client Libraries",
"url": "/mq/reference/configuration/index.html",
"section": null,
"date": null,
"categories": [],
"summary": null,
"short": "Many of the client libraries make use of a global configuration scheme for all of Iron.io services. This approach lets you set and manage your tokens and project IDs in a centralized manner and make them available across all of Iron.io's services, even across workspaces. This scheme allows you to spend less time on configuration issues and more on writing",
"body": "Many of the client libraries make use of a global configuration scheme for all of Iron.io services. This approach lets you set and manage your tokens and project IDs in a centralized manner and make them available across all of Iron.io's services, even across workspaces. This scheme allows you to spend less time on configuration issues and more on writing code. It also supports the design pattern that calls for strict separation of configuration information from application code. The two most common variables used in configuration are the project ID and the token . The project ID is a unique identifier for your project and can be found in the HUD . The token is one of your OAuth2 tokens, which can be found on their own page in the HUD. Table of Contents Quick Start About the Scheme The Overall Hierarchy The Environment Variables The File Hierarchy The JSON Hierarchy Example Setting Host Example Accepted Values Quick Start Create a file called .iron.json in your home directory (i.e., ~/.iron.json ) and enter your Iron.io credentials: .iron.json { \"token\" : \"MY_TOKEN\" , \"project_id\" : \"MY_PROJECT_ID\" } The project_id you use will be the default project to use. You can always override this in your code. Alternatively, you can set the following environment variables: IRON_TOKEN = MY_TOKEN IRON_PROJECT_ID = MY_PROJECT_ID That's it, now you can get started. About the Scheme The configuration scheme consists of three hierarchies: the file hierarchy, the JSON hierarchy, and the overall hierarchy. By understanding these three hierarchies and how clients determine the final configuration values, you can build a powerful system that saves you redundant configuration while allowing edge cases. The Overall Hierarchy The overall hierarchy is simple to understand: local takes precedence over global. The configuration is constructed as follows: The global configuration file sets the defaults according to the file hierarchy. The global environment variables overwrite the global configuration file's values. The product-specific environment variables overwrite everything before them. The local configuration file overwrites everything before it according to the file hierarchy. The configuration file specified when instantiating the client library overwrites everything before it according to the file hierarchy. The arguments passed when instantiating the client library overwrite everything before them. The Client's Environment Variables set in iron.json The environment variables the scheme looks for are all of the same formula: the camel-cased product name is switched to an underscore (\"IronWorker\" becomes \"iron_worker\") and converted to be all capital letters. For the global environment variables, \"IRON\" is used by itself. The value being loaded is then joined by an underscore to the name, and again capitalised. For example, to retrieve the OAuth token, the client looks for \"IRON_TOKEN\". In the case of product-specific variables (which override global variables), it would be \"IRON_WORKER_TOKEN\" (for IronWorker). Accepted Values The configuration scheme looks for the following values: project_id : The ID of the project to use for requests. token : The OAuth token that should be used to authenticate requests. Can be found in the HUD . host : The domain name the API can be located at. Defaults to a product-specific value, but always using Amazon's cloud. protocol : The protocol that will be used to communicate with the API. Defaults to \"https\", which should be sufficient for 99% of users. port : The port to connect to the API through. Defaults to 443, which should be sufficient for 99% of users. api_version : The version of the API to connect through. Defaults to the version supported by the client. End-users should probably never change this. Note that only the project_id and token values need to be set. They do not need to be set at every level of the configuration, but they must be set at least once by the levels that are used in any given configuration. It is recommended that you specify a default project_id and token in your iron.json file. The File Hierarchy The hierarchy of files is simple enough: if a file named .iron.json exists in your home folder, that will provide the defaults. if a file named iron.json exists in the same directory as the script being run, that will be used to overwrite the values from the .iron.json file in your home folder. Any values in iron.json that are not found in .iron.json will be added; any values in .iron.json that are not found in iron.json will be left alone; any values in .iron.json that are found in iron.json will be replaced with the values in iron.json . This allows a lot of flexibility: you can specify a token that will be used globally (in .iron.json ), then specify the project ID for each project in its own iron.json file. You can set a default project ID, but overwrite it for that one project that uses a different project ID. The JSON Hierarchy Each file consists of a single JSON object, potentially with many sub-objects. The JSON hierarchy works in a similar manner to the file hierarchy: the top level provides the defaults. If the top level contains a JSON object whose key is an Iron.io service ( iron_worker , iron_mq , or iron_cache ), that will be used to overwrite those defaults when one of their clients loads the config file. This allows you to define a project ID once and have two of the services use it, but have the third use a different project ID. Example In the event that you wanted to set a token that would be used globally, you would set ~/.iron.json to look like this: .iron.json { \"token\" : \"YOUR TOKEN HERE\" } To follow this up by setting your project ID for each project, you would create an iron.json file in each project's directory: iron.json { \"project_id\" : \"PROJECT ID HERE\" } If, for one project, you want to use a different token, simply include it in that project's iron.json file: iron.json { \"project_id\" : \"PROJECT ID HERE\" , \"token\" : \"YOUR TOKEN HERE\" } Now for that project and that project only , the new token will be used. If you want all your IronCache projects to use a different project ID, you can put that in the ~/.iron.json file: .iron.json { \"project_id\" : \"GLOBAL PROJECT ID\" , \"iron_cache\" : { \"project_id\" : \"IRONCACHE ONLY PROJECT ID\" } } If you don't want to write things to disk or on Heroku or a similar platform that has integrated with Iron.io to provide your project ID and token automatically, the library will pick them up for you automatically. Setting Host It is useful to quickly change your host in cases where your region has gone down. If want to set the Host, Post, and Protocol specifically, simply include those keys in that project's iron.json file: iron.json { \"project_id\" : \"PROJECT ID HERE\" , \"token\" : \"YOUR TOKEN HERE\" \"port\" : 443 , \"protocol\" : \"https\" , \"host\" : \"mq-rackspace-ord.iron.io\" } "
}, {
"title": "IronMQ Environment",
"url": "/mq/reference/environment/index.html",
"section": null,
"date": null,
"categories": [],
"summary": null,
"short": "Table of Contents Message Structure Message Constraints Queue Attributes Security Groups and IP Ranges Message Structure The message structure is flexible and straight-forward. Messages can be variable in size and can contain almost any text or data format. Message Element Type Token OAuth2 access token Delay Integer (seconds) Timeout Integer (seconds) Expiration Integer (seconds) Message body ASCII text Message Constraints",
"body": " Table of Contents Message Structure Message Constraints Queue Attributes Security Groups and IP Ranges Message Structure The message structure is flexible and straight-forward. Messages can be variable in size and can contain almost any text or data format. Message Element Type Token OAuth2 access token Delay Integer (seconds) Timeout Integer (seconds) Expiration Integer (seconds) Message body ASCII text Message Constraints The basic message handling operation is put-get-delete. Messages are put on the queue by senders. The messages can have delays associated with them. If included, the message is not made available on the queue until the delay is up (default is 0 or no delay). Receivers get one or more messages (up to 100). Once the receive is done processing a message, it deletes it. If a message is not deleted prior to the timeout (default 60 sec), it is put back on the queue. Messages on the queue will expire after a certain amount of time (default is 7 days). Message Var Default Maximum Notes Message Size dependent on plan 64KB, 256KB Includes the entire request (delay, timeout, expiration). Limit will vary depending on current plan. Please view plan comparision page here . If message size limits higher than 256KB are needed, please contact [email protected] . Delay 0sec 604,800sec Message is made available on queue after the delay expires. Timeout 60sec 86,400sec Message goes back on queue after timeout unless deleted. Expiration 604,800sec 2,592,000sec Equates to 7 days and 30 days, respectively. Messages per Get 1 100 One or more messages can be handled at a time. Queue Attributes Queues have their own set of attributes. To get the information about a queue, use the Info API call . The following is a list of all the queue attributes: Common Attributes Name Explanation name Name of the queue. (Names with spaces should URL encode/use \"%20\".) id Unique queue's ID. size Current queue size. It's usually 0 for Push Queues. total_messages Number of messages which were posted to the queue. project_id ID of the project that owns the queue. Attributes Related to Push Queues Name Explanation push_type Push queue type. Either multicast (default) or unicast . retries Maximum number of times messages will be sent to each HTTP endpoint. Messages will not be resent after a call receives an HTTP response with a status code of 200. Default is 3 seconds. Maximum is 100 seconds. retries_delay Delay between retries in seconds. Default is 60 seconds. Minimum is 3 and maximum is 86400 seconds. subscribers List of subscribers, format is [{url: \"http://...\"}, ...] . error_queue Enable error queue {\"error_queue\": \"ERROR_QUEUE_NAME\"} . Empty string defaults to disabled {\"error_queue\": \"\"} Defaults to disabled if not declared. note: error queue will not appear in the hud.iron.io until first error occurs. Security Groups and IP Ranges Iron.io provides an AWS security group for IronMQ, generally used in the case of push queues, In the event users want to isolate AWS EC2, RDS, or other services to these groups/ranges. EC2 Security Group Account ID Security Group String simple_deployer_web 7227-1646-5567 722716465567/simple_deployer_web "
}, {
"title": "IronMQ Reference",
"url": "/mq/reference/index.html",
"section": null,
"date": null,
"categories": [],
"summary": null,
"short": "The IronMQ reference documentation contains all the low-level information about IronMQ. Every little detail has been recorded here for you. REST/HTTP API Every endpoint, every parameter of our API is at your fingertips. Beanstalk Interface All the information you need to use your Beanstalk client with IronMQ's scalable backend. Push Queues Everything you need to know about Push Queues. Environment",
"body": "The IronMQ reference documentation contains all the low-level information about IronMQ. Every little detail has been recorded here for you. REST/HTTP API Every endpoint, every parameter of our API is at your fingertips. Beanstalk Interface All the information you need to use your Beanstalk client with IronMQ's scalable backend. Push Queues Everything you need to know about Push Queues. Environment Know exactly what environment your messages will be passed through before you queue them. Configuration Everything you need to know to make the IronMQ client libraries work they way you want them to. Cloud Providers Get the specifics on how to choose where your message queues live. Something Missing? Can't find the information you need here? Our engineers are always available and will be happy to answer questions. "
}, {
"title": "IronMQ Push Queues",
"url": "/mq/reference/push_queues/index.html",
"section": null,
"date": null,
"categories": [],
"summary": null,
"short": "Table of Contents Overview Subscribers Push Queue Settings Queueing Messages Retries Error Queues Checking Status How the Endpoint Should Handle Push Messages Response Codes Timeout Long Running Processes - aka 202's Push Queue Headers Encryption and Security Important Notes Troubleshooting Your Push Queues Using Error Queues feature Using Requestb.in Testing on localhost with Ngrok Overview Blog Post for Overview .",
"body": " Table of Contents Overview Subscribers Push Queue Settings Queueing Messages Retries Error Queues Checking Status How the Endpoint Should Handle Push Messages Response Codes Timeout Long Running Processes - aka 202's Push Queue Headers Encryption and Security Important Notes Troubleshooting Your Push Queues Using Error Queues feature Using Requestb.in Testing on localhost with Ngrok Overview Blog Post for Overview . You should also review the MQ API for push queue related endpoints . Subscribers Subscribers are simply URL's that IronMQ will post to whenever a message is posted to your queue. There are currently three types subscribers supported, all differentiated by the URL scheme (first part of the URL): HTTP endpoints: urls with the http or https prefix for instance, http://myapp.com/some/endpoint or https://myapp.com/some/endpoint. WARNING: Do not use the following RFC 3986 Reserved Characters within your in the naming of your subscriber endpoints. IronMQ endpoints: IronMQ endpoints point to another queue on IronMQ. Use these to do fan-out to multiple queues. More info on the IronMQ URL format below. IronWorker endpoints: IronWorker endpoints will fire up an IronWorker task with the message body as the payload. More info on the IronWorker URL format below. Iron.io URL Formats The basic format is similar to any other URL: [ironmq_or_ironworker]://[project_id:token]@[host]/queue_or_worker_name Here are some examples: ironmq:///queue name - refers to the queue named \"queue name\" in the same project. ironmq://project id:token@/queue name - refers to the queue named \"queue_name\" in a different project on same region/cloud. ironmq://project id:[email protected]/queue name - refers to the queue named \"queue_name\" on a different region/cloud. ironworker:///worker name - refers to a worker on IronWorker called \"worker name\". Push Queue Settings To turn a queue into a push queue (or create one), POST to your queue endpoint with the following parameters: subscribers - required - an array of hashes containing subscribers. eg: {\"url\": \"http://myserver.com/endpoint\"} . The maximum is 64kb for JSONify array of subscribers' hashes. WARNING: Do not use the following RFC 3986 Reserved Characters within your in the naming of your subscriber endpoints. ! * ' ( ) ; : @ & = + $ , / ? # [ ] push_type - multicast or unicast. Default is multicast. Set this to 'pull' to revert back to a pull queue. retries - number of times to retry. Default is 3. Maximum is 100. retries_delay - time in seconds between retries. Default is 60. Minimum is 3 and maximum is 86400 seconds. error queue - the name of another queue where information about messages that can't be delivered after retrying retries number of times will be placed. Pass in an empty string to disable Error queues. Default is disabled. The default queue type for an error queue will be a pull queue. See <a href=\"#error queues\">Error Queues section below. Queueing Messages This is the same as posting any message to IronMQ. Here is a curl example to post a message to the queue: You should get a curl response that looks like this: Retries IronMQ will automatically retry if it fails to deliver a message. This can be either a connection error, an error response (eg: 5xx), or any other scenario that does not return 2xx response. The behavior is a bit different depending on whether it's unicast or multicast as follows: multicast treats each endpoint separately and will try each endpoint once per retry. If one endpoint fails, it will retry that single endpoint after retries_delay, it won't retry endpoints that were successful. unicast will try one endpoint in the set of subscribers. If it succeeds, that message is considered delivered. If it fails, a different endpoint is tried immediately and this continues until a successful response is returned or all endpoints have been tried. If there is no successful response from all endpoints, then the message will be retried after retries_delay. Error Queues Error queues are used to get information about messages that we were unable to deliver due to errors/failures while trying to push a message. To create an error queue Post to your push queue a message with the \"error_queue\" option defined. {\"push_type\":\"multicast/unicast\", \"subscribers\": [ {\"url\": \"http://thiswebsitewillthrowanerror.com\"} ], \"error_queue\": \"MY_EXAMPLE_ERROR_QUEUE\"} If a push queue is set with the error_queue parameter, then after the set number of retries , a message will be put in the named error queue and viewable via your account dashboard. By default, the error queue will be a pull queue. NOTE: An error queue will not appear in your dashboard until an initial error message is received. The error queue message will contain the following information: You can look up the original message if needed via the GET message endpoint using the source_msg_id value. To turn off/disable an error queue Post to your push queue set the error queue option to an empty string. ex: \"error_queue\": \"\". { \"push_type\" : \"multicast/unicast\" , \"subscribers\" : [ { \"url\" : \"http://thiswebsitewillthrowanerror.com\" } ], \"error_queue\" : \"\" } NOTE: Ommitting the \"error_queue\" option will not disable the error queue. Checking Status If you want the detailed status of the delivery to each of your subscribers, you can check that too. In the curl example below, you'll need to exchange MESSAGE_ID with the id that was returned in the response above when you posted a message. This should return a response like this: How the Endpoint Should Handle Push Messages These are the things the endpoint that is receiving the push should know about. Push messages' bodies will be sent to endpoints as is (strings) as POST request body. To obtain message's body just read request body. The receiving endpoint must respond with a 200 or 202 if they have accepted the message successfully. Response Codes 200 - message is deleted / acknowledged and removed from the queue. 202 - message is reserved until explicitly deleted or the timeout is exceeded. See 202 section below. 4XX or 5XX - the push request will be retried. Timeout If an endpoint doesn't respond within timeout, it's marked as failed/error and will be retried. Default timeout is 60 seconds. If you'd like to take more time to process messages, see 202 section below. Long Running Processes - aka 202 If you'd like to take some time to process a message, more than the 60 second timeout, you must respond with HTTP status code 202. Be sure to set the \"timeout\" value when posting your message to the maximum amount of time you'd like your processing to take. If you do not explicitly delete the message before the \"timeout\" has passed, the message will be retried. To delete the message, check the \"Iron-Subscriber-Message-Url\" header and send a DELETE request to that URL. Push Queue Headers Each message pushed will have some special headers as part of the HTTP request. User-Agent - static - \"IronMQ Pusherd\" Iron-Message-Id - The ID for your original message allowing you to check the status Iron-Subscriber-Message-Id - The ID for the message to the particular subscriber. Iron-Subscriber-Message-Url - A URL to delete/acknowledge the message. Generally used with the 202 response code to tell IronMQ that you're done with the message. Send a DELETE http request to this URL to delete it. Encryption and Security When you are using your private API as subscriber and want to secure connection to IronMQ you are able to use HTTPS endpoints. https://subscriber.domain.com/push/endpoint Also, if you want some kind of authentication you can use various standards for authorization with tokens. Like OAuth or OpenID. In this case, specify a token in your subscriber's URL. https://subscriber.domain.com/push/endpoint?auth=TOKEN Another possibility to specify a token is put it to your messages' bodies and parse it on your side. In this case a token will be encrypted by SSL/TLS. Important Notes You should not push and pull from a queue, a push queue's messages will be deleted/acknowledged immediately and not be available for pulling. When a Pull Queue contains messages and you turn it to Push Queue you are still able to get messages from the queue. Also, messages put on the queue before it becomes a Push Queue will not be sent to your subscribers. New messages will be processed as usual for Push Queues, and pushed to your subscribers. To revert your Push Queue to regular Pull Queue just update push_type to \"pull\" . Do not use the following RFC 3986 Reserved Characters within your in the naming of your subscriber endpoints. Troubleshooting Push Queues Push queues are extremely powerful but do not by default give insight on what happens to your message once it leaves queue. Did it hit the endpoint? Did it retry multple times due to a server timeout error? Do I need to set a different content-type header? At Iron we have 3 recommended ways to debug problems you may encounter with your push queues: using IronMQ's Error Queue feature, RequestBin, and Ngrok Using Error Queues (IronMQ Feature) Error queues are vastly useful to record, document, and react to retries, errors, and bugs that involve your Message queue endpoint. See our Error Queue Documentation on how to setup and read error queue messages. Using RequestBin RequestBin is a very useful and free service provided by Iron.io's friends at Runscope that helps users debug all kinds of request to a unqiuely generated endpoint. A bin will keep the last 20 requests made to it and remain available for 48 hours after it was created.You can create a more permanent bin by signing up here . Step 1: go to http://requestb.in/ and click on \"Create a RequestBin Step 2: copy the unique url Step 3: paste it as a subscriber endpoint on your queue. This example shows us pasting it via the hud/dashboard interface but you an do the same using th raw api. Step 4: post a message to your push queue, return to your unique RequestBin's inspect page. Here you will be able to view and inspect your the headers and response body amongst other very useful information about your push queue's request. Seeing that your message was delivered successfully to a bin will easily tell you that there may be a problem with how your server is handling the message that is coming from your push queue. Often times it could be an endpoint that has not been coded to handle the post parameter's content type, endpoints that don't exist, or returning a bad response code due to internal server errors. Testing on localhost with Ngrok To be able to develop and test on your local machine, you'll need to make your localhost accessible for IronMQ. This can be easily done by tunneling it to the outside world with tools such as ngrok . Step 1: install ngrok Step 2: open ngrok to your localhost's port and recieve a unique subdomain on http://XXXXX.ngrok.com or https://XXXXX.ngrok.com Step 3: inspect your traffic via Ngrok's local interface at http://localhost:4040 Last Step: debug, debug, debug! You can reply the message to your local server at which point you can debug consistantly with your favorite debugging methods, i.e print statements and runtime debuggers. "
}, {
"title": "IronMQ Queue Alerts",
"url": "/mq/reference/queue_alerts/index.html",
"section": null,
"date": null,
"categories": [],
"summary": null,
"short": "Table of Contents Overview Alerts Parameters Alerts Messages Setting Alerts in Dashboard Example Alert Settings and their meaning Important Notes Overview Check out our Blog Post on Queue Alerts . Alerts, triggered when the queue hits a pre-determined number of messages (both ascending and descending), allow developers to notify other systems based on the activity of a queue. Actions include",
"body": " Table of Contents Overview Alerts Parameters Alerts Messages Setting Alerts in Dashboard Example Alert Settings and their meaning Important Notes Overview Check out our Blog Post on Queue Alerts . Alerts, triggered when the queue hits a pre-determined number of messages (both ascending and descending), allow developers to notify other systems based on the activity of a queue. Actions include things like: auto-scaling, failure detection, load-monitoring, and system health. Alerts Parameters IronMQ provides a number of routes to manipulate queue alerts. Add and Update Alerts Endpoints Add alerts to a queue Update alerts on a queue Request body example: { \"alerts\" : [ { \"type\" : \"fixed\" , \"direction\" : \"asc\" , \"trigger\" : 1000 , \"queue\" : \"queue-to-post-size-alerts-to\" , \"snooze\" : 120 }, { \"type\" : \"progressive\" , \"direction\" : \"desc\" , \"trigger\" : 100 , \"queue\" : \"queue-to-post-progressive-to\" } ] } alerts - optional - array of hashes containing alerts hashes. Required type - set to \"fixed\" or \"progressive\" A \"fixed\" alert will trigger an alert when the queue size passes the value set by trigger parameter. A \"progressive\" alert will trigger when queue size passes any of values calculated by trigger * N where N >= 1 . Example: trigger is set to 10, alerts will be triggered at queue sizes 10, 20, 30, etc. trigger - must be integer value > 0. Used to calculate actual values of queue size when alert must be triggered. See type field description. queue Name of queue which will be used to post alert messages. Optional direction - set to \"asc\" (default) or \"desc\" An \"asc\" setting will trigger alerts as the queue grows in size. A \"desc\" setting will trigger alerts as the queue decreases in size. snooze - Number of seconds between alerts. Must be integer value >= 0 If alert must be triggered but snooze is still active, alert will be omitted. Alerts Messages Alert messages are JSONified strings in the following format: { \"source_queue\" : \"test_queue\" , \"queue_size\" : 12 , \"alert_id\" : \"530392f41185ab1f2a0005f7\" , \"alert_type\" : \"progressive\" , \"alert_direction\" : \"asc\" , \"alert_trigger\" : 5 , \"created_at\" : \"2014-02-18T17:10:43Z\" } Setting Alerts in Dashboard You can easily create an alert through our interface on our queue view in the Iron.io dashboard. Navigate down and click view queue alerts on the left hand side of the queue view. Here you can add up to 5 alerts per queue. Example Alert Settings and their meaning The following serve as examples of how you may do about setting your alert settings. { \"type\": \"progressive\", \"direction\": \"asc\", \"trigger\": 1000, \"queue\": \"worker_push_queue\" } Interpretation: For every progressive increment of 1,000 messages on my queue in the ascending direction trigger an alert to my queue entitled “worker push queue”. This pattern would trigger additional workers to run for seamless autoscaling. { \"type\": \"fixed\", \"direction\": \"asc\", \"trigger\": 1, \"queue\": \"worker_polling_queue\" } Interpretation: When my queue passes the fixed value of 1 post to my “worker polling queue”. This pattern would trigger a worker to run whenever there are items within the queue. Important Notes Our system checks for duplicate alerts each time you add a new alert to a queue. It the compares type , direction , and trigger parameters to find duplicates. If one or more of the new alerts is a duplicate, we will return a HTTP 400 error and message as such: {\"msg\": \"At least one new alert duplicates current queue alerts.\"} . When you try to add alerts to a Push Queue or convert Pull Queue with alerts to a Push Queue, IronMQ will respond with HTTP 400 error and message as such: {\"msg\": \"Push queues do not support alerts.\"} "
}, {
"title": "IronMQ On-Premise Installation",
"url": "/mq/3/getting_started/index.html",
"section": null,
"date": null,
"categories": [],
"summary": null,
"short": "Getting Started Download Recommended System Requirements Install Setup New User and Project Updating Installation Backups Download Single Server Binary You can download our single server evaluation version here . Recommended System Requirements Operating System: x64 linux (kernel 3+) or docker container RAM : 8GB+ CPU : Multicore CPU Storage : SSD Drive Unpack, Install, and Start Unpack the provided archive",
"body": " Getting Started Download Recommended System Requirements Install Setup New User and Project Updating Installation Backups Download Single Server Binary You can download our single server evaluation version here . Recommended System Requirements Operating System: x64 linux (kernel 3+) or docker container RAM : 8GB+ CPU : Multicore CPU Storage : SSD Drive Unpack, Install, and Start Unpack the provided archive unzip ironmq-x.y.z....zip You will end up with a directory called ironmq , cd into that directory to continue. Run install script ./iron install Start Services ./iron start Setup New User and Project With the server running in a seperate terminal window run the following commands to create a new user and a new project. create a new user by default the admin password is set to superToken123 , it is recommended you change this after initially creating your first user account. ./iron -t superToken123 create user [email protected] password create a new project Grab the token that's printed after previous command, you'll need it for the next ones. ./iron -t NEWTOKEN create project myproject Then you can use that new project with the new token to use the API. Configuration Some Cases you may want to change the default configuration options we have setup in our single server evaluation. edit ironauth/config.json and/or ironmq/config.json Note: be sure super user.token from ironauth config matches super token in ironmq config. Locate the configuration file config.json within /bin/mq/config.json { \"license\": { \"description\": \"This is your license key provided by Iron.io\", \"key\": \"DONTCHANGEME\" }, \"api\": { \"http_port\": 8080 }, \"auth\": { \"host\": \"http://localhost:8090\" }, \"logging\": { \"to\": \"stdout/stderr\", \"level\": \"debug\", \"prefix\": \"\" }, \"stathat\":{ \"email\":\"\" \"prefix\":\"\" } \"pusher\": { \"num_queues_brokers\": 5, \"num_messages_consumers\": 50, \"dial_timeout\": 10, \"request_timeout\": 60 }, \"aes_key_description\": \"Key for generating id's.\", \"aes_key\": \"770A8A65DA156D24EE2A093277530142\", \"data\": { \"dir_description\": \"Where data files will be stored\", \"dir\": \"../data\", \"cache_size_description\": \"Size of cache in MB -- don't get carried away\", \"cache_size\": 128 }, \"host\": \"locahost\" } license - do not modify api - this is the default port for your IronMQ server auth - this is the default host for your IronMQ Auth Server logging - by default logs will be output to stdout/stderr. A prefix is useful should you be storing logs for a service like papertrail stathat - IronMQ and IronAuth will both default to sending logs in the form of metadata to Iron.io's internal tools. We would appreciate keeping this turned on in order to better assist with performance and configuration questions. pusher - do not modify stathat - IronMQ and IronAuth will both default to sending logs in the form of metadata to Iron.io's internal tools. We would appreciate keeping this turned on in order to better assist with performance and configuration questions. aeskey - do not modify data dir - this is the data directory that your server will be using. cache_size - this is the size of the cache in MB. Update Installation Get the latest zip package and unzip it (same as in getting started) Stop running services sudo stop ironmq sudo stop ironauth Install upgrade ./iron install Start services again sudo start ironauth sudo start ironmq Backing Up Data CAUTION: Hot backups during runtime are currently not supported, you must stop the services before backing up. Stop services First stop the service. If using Upstart: sudo stop ironmq sudo stop ironauth Copy data directory Make a copy of the data directory to another directory. Data files will be stored at $HOME/iron/data by default if you haven't changed the configs. Start services After you've copied the files, you can start the services back up again: sudo start ironauth sudo start ironmq Archive safely. "
}, {
"title": "IronMQ On-Premise Overview",
"url": "/mq/3/index.html",
"section": null,
"date": null,
"categories": [],
"summary": null,
"short": "IronMQ On-Premise is a fully featured high-performance messaging solution that can be more easily deployed in high availability configurations across clouds. Getting Started Download System Requirements Installation Setup User Tokens Updating Installation Backups Reference REST/HTTP API Client Libraries Integrations Amazon SQS Protocol",
"body": " IronMQ On-Premise is a fully featured high-performance messaging solution that can be more easily deployed in high availability configurations across clouds. Getting Started Download System Requirements Installation Setup User Tokens Updating Installation Backups Reference REST/HTTP API Client Libraries Integrations Amazon SQS Protocol "
}, {
"title": "IronMQ On-Premise Installation",
"url": "/mq/3/integrations/amazon_sqs/index.html",
"section": null,
"date": null,
"categories": [],
"summary": null,
"short": "SQS Support You can use any SQS client by changing the host and using the following as your access key id and secret key id: Access Key Id: {project_id}:{token} Secret Key Id: {anything, this is ignore}",
"body": "SQS Support You can use any SQS client by changing the host and using the following as your access key id and secret key id: Access Key Id: {project_id}:{token} Secret Key Id: {anything, this is ignore} "
}, {
"title": "IronMQ v3 API Reference",
"url": "/mq/3/reference/api/index.html",
"section": null,
"date": null,
"categories": [],
"summary": null,
"short": "Contents Changes Global Stuff Queues Create Queue Get Queue Update Queue Delete Queue List Queues Add or Update Subscribers Replace Subscribers Remove Subscribers Messages Post Messages - Core operation to add messages to a queue Post Messages via Webhook Reserve/Get Messages - Core operation to get message(s) off the queue. Get Message by Id Peek Messages - View first messages",
"body": "Contents Changes Global Stuff Queues Create Queue Get Queue Update Queue Delete Queue List Queues Add or Update Subscribers Replace Subscribers Remove Subscribers Messages Post Messages - Core operation to add messages to a queue Post Messages via Webhook Reserve/Get Messages - Core operation to get message(s) off the queue. Get Message by Id Peek Messages - View first messages in queue without reserving them Delete Message - Core operation to delete a message after it's been processed Delete Messages - Batch delete Release Message - Makes a message available for another process Touch Message - Extends the timeout period so process can finish processing message Clear Messages - Removes all messages from a queue Get Push Statuses for a Message Changes Changes from v2.0.1: Per-message expirations turn into per-queue expirations Timed out and released messages go to the front of the queue. (This is not an API change, but it is a behavior change that will likely cause some tests to fail.) Push queues must be explicitly created. There's no changing a queue's type. All json objects are wrapped at the root level. All object structures changed a bit, please review json. Clear messages endpoint changed to be part of delete messages. Can no longer set timeout when posting a message, only when reserving one. Webhook url is no longer /queues/{queue name}/messages/webhook, it's now /queues/{queue name}/webhook Pagination principle in List Queues changed. It doesn’t support page parameter. You should specify the name of queue prior to the first desirable queue in result. Global Stuff Base path: /3/projects/{project_id} All requests: Headers: Content-type: application/json Authentication Headers: Authorization: OAuth TOKEN Queues Create Queue PUT /queues/{queue_name} Request: All fields are optional. type can be one of: [ multicast , unicast , pull ] where multicast and unicast define push queues. default is pull If push field is defined, this queue will be created as a push queue and must contain at least one subscriber. Everything else in the push map is optional. { \"queue\" : { \"message_timeout\" : 60 , \"message_expiration\" : 3600 , \"type\" : \"pull/unicast/multicast\" , \"push\" : { \"subscribers\" : [ { \"name\" : \"subscriber_name\" , \"url\" : \"http://mysterious-brook-1807.herokuapp.com/ironmq_push_1\" , \"headers\" : { \"Content-Type\" : \"application/json\" } } ], \"retries\" : 3 , \"retries_delay\" : 60 , \"error_queue\" : \"error_queue_name\" } ] } } Response: 201 Created SAME AS GET QUEUE INFO Get Queue Info GET /queues/{queue_name} Response: 200 or 404 Some fields will not be included if they are not applicable like push if it's not a push queue and alerts if there are no alerts. { \"queue\" : { \"project_id\" : 123 , \"name\" : \"my_queue\" , \"size\" : 0 , \"total_messages\" : 0 , \"message_timeout\" : 60 , \"message_expiration\" : 604800 , \"type\" : \"pull/unicast/multicast\" , \"push\" : { \"subscribers\" : [ { \"name\" : \"subscriber_name\" , \"url\" : \"http://mysterious-brook-1807.herokuapp.com/ironmq_push_1\" , \"headers\" : { \"Content-Type\" : \"application/json\" } } ], \"retries\" : 3 , \"retries_delay\" : 60 , \"error_queue\" : \"error_queue_name\" } ] } } Update Queue PATCH /queues/{queue_name} Request: SAME AS CREATE QUEUE, except queue type, which is static. Note: API raises error when you try to set subscribers to pull type queue or alerts on push queue. Response: 200 or 404 Some fields will not be included if they are not applicable like push if it's not a push queue and alerts if there are no alerts. SAME AS GET QUEUE INFO Delete Queue DELETE /queues/{queue_id} Response: 200 or 404 { \"msg\" : \"Deleted\" } List Queues GET /queues Lists queues in alphabetical order. Request URL Query Parameters: per_page - number of elements in response, default is 30. previous - this is the last queue on the previous page, it will start from the next one. If queue with specified name doesn’t exist result will contain first per_page queues that lexicographically greater than previous prefix - an optional queue prefix to search on. e.g., prefix=ca could return queues [\"cars\", \"cats\", etc.] Response: 200 or 404 { \"queues\" : [ { \"name\" : \"queue_name_here\" }, ] } Add or Update Subscribers to a Queue POST /queues/{queue_name}/subscribers Add subscribers (HTTP endpoints) to a queue. In the case subscriber with given name exists, it will be updated. Request: { \"subscribers\" : [ { \"name\" : \"first\" , \"url\" : \"http://mysterious-brook-1807.herokuapp.com/ironmq_push_2\" , \"headers\" : { \"Content-Type\" : \"application/json\" } }, { \"name\" : \"other\" , \"url\" : \"http://this.host.is/not/exist\" } ] } Response: { \"msg\" : \"Updated\" } Replace Subscribers on a Queue PUT /queues/{queue_name}/subscribers Sets list of subscribers to a queue. Older subscribers will be removed. Request: { \"subscribers\" : [ { \"name\" : \"the_only\" , \"url\" : \"http://my.over9k.host.com/push\" } ] } Response: { \"msg\" : \"Updated\" } Remove Subscribers from a Queue DELETE /queues/{Queue Name}/subscribers Remove subscriber from a queue. This is for Push Queues only. Request: { \"subscribers\" : [ { \"name\" : \"other\" } ] } Response: { \"msg\" : \"Updated\" } Messages Post Messages POST /queues/{queue_name}/messages Request: { \"messages\" : [ { \"body\" : \"This is my message 1.\" , \"delay\" : 0 }, ] } Response: 201 Created Returns a list of message ids in the same order as they were sent in. { \"ids\" : [ \"2605601678238811215\" ], \"msg\" : \"Messages put on queue.\" } Post Messages via Webhook By adding the queue URL below to a third party service that supports webhooks, every webhook event that the third party posts will be added to your queue. The request body as is will be used as the \"body\" parameter in normal POST to queue above. Endpoint POST /queues/ {Queue Name} /webhook URL Parameters Project ID : The project these messages belong to. Queue Name : The name of the queue. If the queue does not exist, it will be created for you. Reserve Messages POST /queues/{queue_name}/reservations Request: All fields are optional. n: The maximum number of messages to get. Default is 1. Maximum is 100. Note: You may not receive all n messages on every request, the more sparse the queue, the less likely you are to receive all n messages. timeout: After timeout (in seconds), item will be placed back onto queue. You must delete the message from the queue to ensure it does not go back onto the queue. If not set, value from queue is used. Default is 60 seconds, minimum is 30 seconds, and maximum is 86,400 seconds (24 hours). wait: Time to long poll for messages, in seconds. Max is 30 seconds. Default 0. delete: If true, do not put each message back on to the queue after reserving. Default false. { \"n\" : 1 , \"timeout\" : 60 , \"wait\" : 0 , \"delete\" : false } Response: 200 { \"messages\" : [ { \"id\" : 123 , \"body\" : \"this is the body\" , \"reserved_count\" : 1 , \"reservation_id\" : \"def456\" }, ] } Will return an empty array if no messages are available in queue. Get Message by Id GET /queues/{queue_name}/messages/{message_id} Response: 200 TODO push queue info ? { \"message\" : { \"id\" : 123 , \"body\" : \"This is my message 1.\" , \"reserved_count\" : 1 , \"reservation_id\" : \"abcdefghijklmnop\" } } Peek Messages GET /queues/{queue_name}/messages Request: n: The maximum number of messages to peek. Default is 1. Maximum is 100. Note: You may not receive all n messages on every request, the more sparse the queue, the less likely you are to receive all n messages. Response: 200 Some fields will not be included if they are not applicable like push if it's not a push queue and alerts if there are no alerts. { \"messages\" : [ { \"id\" : 123 , \"body\" : \"message body\" , \"reserved_count\" : 1 }, ] } Delete Message DELETE /queues/{queue_name}/messages/{message_id} Request: reservation id: This id is returned when you reserve a message and must be provided to delete a message that is reserved. If a reservation times out, this will return an error when deleting so the consumer knows that some other consumer will be processing this message and can rollback or react accordingly. If the message isn't reserved, it can be deleted without the reservation id. { \"reservation_id\" : \"def456\" } Response: 200 or 404 { \"msg\" : \"Message deleted.\" } Delete Messages DELETE /queues/{queue_name}/messages This is for batch deleting messages. Maximum number of messages you can delete at once is 100. Request: reservation_id: This id is returned when you reserve a message and must be provided to delete a message that is reserved. If a reservation times out, this will return an error when deleting so the worker knows that some other worker will be processing this message and can rollback or react accordingly. { \"ids\" : [ { \"id\" : 123 , \"reservation_id\" : \"abc\" }, ] } Response: 200 or 404 { \"msg\" : \"Deleted.\" } Touch Message POST /queues/{queue_name}/messages/{message_id}/touch Request: { \"reservation_id\" : \"5259a40cf166faa76a23f7450daaf497\" } Response: 200 or 404 { \"msg\" : \"Touched\" } Release Message POST /queues/{queue_name}/messages/{message_id}/release Request: { \"reservation_id\" : \"5259a40cf166faa76a23f7450daaf497\" , \"delay\" : 60 } Response: 200 or 404 { \"msg\" : \"Released\" } Clear Messages DELETE /queues/{queue_name}/messages This will remove all messages from a queue. Request: {} Response: 200 or 404 { \"msg\" : \"Cleared\" } Get Push Statuses for a Message GET /queues/{queue_name}/messages/{message_id}/subscribers You can retrieve the push statuses for a particular message which will let you know which subscribers have received the message, which have failed, how many times it's tried to be delivered and the status code returned from the endpoint. Response: { \"subscribers\" : [ { \"name\" : \"first\" \"retries_remaining\" : 2 , \"retries_total\" : 6 , \"status_code\" : 200 , \"url\" : \"http://mysterious-brook-1807.herokuapp.com/ironmq_push_2\" , \"last_try_at\" : \"2014-07-30T15:45:03Z\" }, { \"name\" : \"other\" \"retries_remaining\" : 2 , \"retries_total\" : 6 , \"status_code\" : 200 , \"url\" : \"http://this.host.is/not/exist\" , \"last_try_at\" : \"2014-07-30T15:44:29Z\" } ] } "
}, {
"title": "IronAuth API",
"url": "/mq/3/reference/auth_api/index.html",
"section": null,
"date": null,
"categories": [],
"summary": null,
"short": "Objects (json) Not all fields are required. < token > { \"_id\" \"user_id\" \"type\" \"name\" \"token\" \"admin\" bool } < project > { \"id\" \"user_id\" \"name\" \"type\" \"partner\" \"status\" \"total_duration\" \"max_schedules\" \"schedules_count\" \"task_count\" \"hourly_task\\_count\" \"hourly_time\" time . Time \"flags\" map [ string ] bool \"shared_with\" [] id } < user > { \"user_id\" \"email\" \"password\" \"tokens\" [] string \"status\" \"plan_worker\"",
"body": "Objects (json) Not all fields are required. < token > { \"_id\" \"user_id\" \"type\" \"name\" \"token\" \"admin\" bool } < project > { \"id\" \"user_id\" \"name\" \"type\" \"partner\" \"status\" \"total_duration\" \"max_schedules\" \"schedules_count\" \"task_count\" \"hourly_task\\_count\" \"hourly_time\" time . Time \"flags\" map [ string ] bool \"shared_with\" [] id } < user > { \"user_id\" \"email\" \"password\" \"tokens\" [] string \"status\" \"plan_worker\" \"flags\" map [ string ] interface {} } Endpoints Authentication HEADER: Authorization alternatively (add query string of oauth ) project_id : Request Query String GET /1/authentication Response: 200 or 403 {} Login (for HUD) HEADER: application/json (no token/oauth) POST /1/authentication Request: request : { email : < email > , password : < password > } Response: response : { user object } All other endpoints require Authorization HEADER Tokens POST /1/tokens request : { < token > } response : { < token > } DELETE /1/tokens/{token_id} response : { msg : success / fail } Users POST /1/users js request: { email: <insert user email> password: <user password> } response: { <user> } GET /1/users URL query params: previous : to paginate, the id of the last user from the last page; if not specified, will start from beginning. per_page : size of the list to return. Default: 30, max: 100. response : { \"users\" : [ < user1 > , < user2 > , ... ] } GET /1/users/{user_id_or_email} response : { < user > } PATCH /1/users/{user_id_or_email} request : { email : < optional field > password : < optional field > } response : { < user > } DELETE /1/users/{user_id_or_email} response : { msg : \"success/fail\" } Projects POST /1/projects request : { name : < insert project name > } response : { < project > } GET /1/projects/{project_id} response : { < project > } DELETE /1/projects/{project_id} response : { msg : success / fail } PATCH /1/projects/{project_id}/share PATCH /1/projects/{project_id}/unshare request : { [] user_id } "
}, {
"title": "IronMQ On-Premise Official Client Libraries",
"url": "/mq/3/reference/client_libraries/index.html",
"section": null,
"date": null,
"categories": [],
"summary": null,
"short": "Official Client Libraries These are our official client libraries for IronMQ Enterprise REST/HTTP API . Go Ruby .Net Java PHP Client Configuration Many of the client libraries make use of a global configuration scheme for all of Iron.io services. This approach lets you set and manage your tokens and project IDs in a centralized manner and make them available across",
"body": "Official Client Libraries These are our official client libraries for IronMQ Enterprise REST/HTTP API . Go Ruby .Net Java PHP Client Configuration Many of the client libraries make use of a global configuration scheme for all of Iron.io services. This approach lets you set and manage your tokens and project IDs in a centralized manner and make them available across all of Iron.io's services, even across workspaces. This scheme allows you to spend less time on configuration issues and more on writing code. It also supports the design pattern that calls for strict separation of configuration information from application code. The two most common variables used in configuration are the project ID and the token . The project ID is a unique identifier for your project and can be found in the HUD . The token is one of your OAuth2 tokens, which can be found on their own page in the HUD. Table of Contents Quick Start About the Scheme The Overall Hierarchy The Environment Variables The File Hierarchy The JSON Hierarchy Example Setting Host Example Accepted Values Quick Start Create a file called .iron.json in your home directory (i.e., ~/.iron.json ) and enter your Iron.io credentials: .iron.json { \"token\" : \"MY_TOKEN\" , \"project_id\" : \"MY_PROJECT_ID\" } The project_id you use will be the default project to use. You can always override this in your code. Alternatively, you can set the following environment variables: IRON_TOKEN = MY_TOKEN IRON_PROJECT_ID = MY_PROJECT_ID That's it, now you can get started. About the Scheme The configuration scheme consists of three hierarchies: the file hierarchy, the JSON hierarchy, and the overall hierarchy. By understanding these three hierarchies and how clients determine the final configuration values, you can build a powerful system that saves you redundant configuration while allowing edge cases. The Overall Hierarchy The overall hierarchy is simple to understand: local takes precedence over global. The configuration is constructed as follows: The global configuration file sets the defaults according to the file hierarchy. The global environment variables overwrite the global configuration file's values. The product-specific environment variables overwrite everything before them. The local configuration file overwrites everything before it according to the file hierarchy. The configuration file specified when instantiating the client library overwrites everything before it according to the file hierarchy. The arguments passed when instantiating the client library overwrite everything before them. The Client's Environment Variables set in iron.json The environment variables the scheme looks for are all of the same formula: the camel-cased product name is switched to an underscore (\"IronWorker\" becomes \"iron_worker\") and converted to be all capital letters. For the global environment variables, \"IRON\" is used by itself. The value being loaded is then joined by an underscore to the name, and again capitalised. For example, to retrieve the OAuth token, the client looks for \"IRON_TOKEN\". In the case of product-specific variables (which override global variables), it would be \"IRON_WORKER_TOKEN\" (for IronWorker). Accepted Values The configuration scheme looks for the following values: project_id : The ID of the project to use for requests. token : The OAuth token that should be used to authenticate requests. Can be found in the HUD . host : The domain name the API can be located at. Defaults to a product-specific value, but always using Amazon's cloud. protocol : The protocol that will be used to communicate with the API. Defaults to \"https\", which should be sufficient for 99% of users. port : The port to connect to the API through. Defaults to 443, which should be sufficient for 99% of users. api_version : The version of the API to connect through. Defaults to the version supported by the client. End-users should probably never change this. Note that only the project_id and token values need to be set. They do not need to be set at every level of the configuration, but they must be set at least once by the levels that are used in any given configuration. It is recommended that you specify a default project_id and token in your iron.json file. The File Hierarchy The hierarchy of files is simple enough: if a file named .iron.json exists in your home folder, that will provide the defaults. if a file named iron.json exists in the same directory as the script being run, that will be used to overwrite the values from the .iron.json file in your home folder. Any values in iron.json that are not found in .iron.json will be added; any values in .iron.json that are not found in iron.json will be left alone; any values in .iron.json that are found in iron.json will be replaced with the values in iron.json . This allows a lot of flexibility: you can specify a token that will be used globally (in .iron.json ), then specify the project ID for each project in its own iron.json file. You can set a default project ID, but overwrite it for that one project that uses a different project ID. The JSON Hierarchy Each file consists of a single JSON object, potentially with many sub-objects. The JSON hierarchy works in a similar manner to the file hierarchy: the top level provides the defaults. If the top level contains a JSON object whose key is an Iron.io service ( iron_worker , iron_mq , or iron_cache ), that will be used to overwrite those defaults when one of their clients loads the config file. This allows you to define a project ID once and have two of the services use it, but have the third use a different project ID. Example In the event that you wanted to set a token that would be used globally, you would set ~/.iron.json to look like this: .iron.json { \"token\" : \"YOUR TOKEN HERE\" } To follow this up by setting your project ID for each project, you would create an iron.json file in each project's directory: iron.json { \"project_id\" : \"PROJECT ID HERE\" } If, for one project, you want to use a different token, simply include it in that project's iron.json file: iron.json { \"project_id\" : \"PROJECT ID HERE\" , \"token\" : \"YOUR TOKEN HERE\" } Now for that project and that project only , the new token will be used. If you want all your IronCache projects to use a different project ID, you can put that in the ~/.iron.json file: .iron.json { \"project_id\" : \"GLOBAL PROJECT ID\" , \"iron_cache\" : { \"project_id\" : \"IRONCACHE ONLY PROJECT ID\" } } If you don't want to write things to disk or on Heroku or a similar platform that has integrated with Iron.io to provide your project ID and token automatically, the library will pick them up for you automatically. Setting Host It is useful to quickly change your host in cases where your region has gone down. If want to set the Host, Post, and Protocol specifically, simply include those keys in that project's iron.json file: iron.json { \"project_id\" : \"PROJECT ID HERE\" , \"token\" : \"YOUR TOKEN HERE\" \"port\" : 443 , \"protocol\" : \"https\" , \"host\" : \"mq-rackspace-ord.iron.io\" } "
}, {
"title": "Iron.io Search",
"url": "/search/index.html",
"section": null,
"date": null,
"categories": [],
"summary": null,
"short": "Search results",
"body": " Search results "
}, {
"title": "Solutions",
"url": "/solutions/index.html",
"section": null,
"date": null,
"categories": [],
"summary": null,
"short": "Sending Email & Notifications Sending notifications is key to delivering great service. A growing user base means distributing the effort and shrinking the time it takes to get emails and messages to your users. Learn More ➟ Image Processing Processing images is a common need in social apps. Whether it’s generating thumbnails, resizing photos, or adding effects, Iron.io can help",
"body": " Sending Email & Notifications Sending notifications is key to delivering great service. A growing user base means distributing the effort and shrinking the time it takes to get emails and messages to your users. Learn More ➟ Image Processing Processing images is a common need in social apps. Whether it’s generating thumbnails, resizing photos, or adding effects, Iron.io can help you offload and scale out the effort. Learn More ➟ "
}, {
"title": null,
"url": "/support/index.html",
"section": null,
"date": null,
"categories": [],
"summary": null,
"short": "Iron.io has three support channels: 1. Real-time Live Chat Drop into our chat room to get help directly from our engineers: http://hud.iron.io/users/support 2. Public Forums We've decided to use Stack Overflow for users to ask questions and get answers. We'll be monitoring the following tags: ironworker , ironmq , ironcache or iron.io so be sure to tag your questions so",
"body": "Iron.io has three support channels: 1. Real-time Live Chat Drop into our chat room to get help directly from our engineers: http://hud.iron.io/users/support 2. Public Forums We've decided to use Stack Overflow for users to ask questions and get answers. We'll be monitoring the following tags: ironworker , ironmq , ironcache or iron.io so be sure to tag your questions so we can find them. 3. Private Questions Please feel free to contact us directly by emailing [email protected] . Stay Informed Our Blog Google+ Twitter "
}, {
"title": "Worker Samples & Examples",
"url": "/worker/examples/index.html",
"section": null,
"date": null,
"categories": [],
"summary": null,
"short": "This page lists a bunch of places to find example code that you can copy and modify to suit your needs or just to get a feel for how to make workers. Check out our Post on Top 10 Use Cases for IronWorker Here! List of example repositories Examples Repository - A repository full of example workers in a bunch",
"body": "This page lists a bunch of places to find example code that you can copy and modify to suit your needs or just to get a feel for how to make workers. Check out our Post on Top 10 Use Cases for IronWorker Here! List of example repositories Examples Repository - A repository full of example workers in a bunch of different languages. Image processing with PHP - An image processing application that shows how to process images in a scalable way using IronWorker. LIVE DEMO Newsy - An example app that pulls the latest URLs from hacker news, takes a screenshot, and pushes them to a browser. Uses Python for the UI and Node with PhantomJS for the workers. LIVE DEMO Using IronWorker in a Ruby on Rails application Koders - A Ruby app using Sinatra and IronWorker to pull top Stackoverflow users then finds the languages they use on Github to get the top languages for the top SO users. Contributing To add an example to this list, just fork our Dev Center repository , add your example to this page, then submit a pull request. "
}, {
"title": "IronWorker Documentation",
"url": "/worker/index.html",
"section": null,
"date": null,
"categories": [],
"summary": null,
"short": "Offload your tasks to the parallel-processing power of the elastic cloud. Write your code, then queue tasks against it—no servers to manage, no scaling to worry about. Check out our Post on Top 10 Use Cases for IronWorker Here! 1 Write Your Worker 2 Create and Upload Your Code Package 3 Queue/Schedule Your Task 4 Inspect Your Worker 1. Write",
"body": "Offload your tasks to the parallel-processing power of the elastic cloud. Write your code, then queue tasks against it—no servers to manage, no scaling to worry about. Check out our Post on Top 10 Use Cases for IronWorker Here! 1 Write Your Worker 2 Create and Upload Your Code Package 3 Queue/Schedule Your Task 4 Inspect Your Worker 1. Write Your Worker IronWorker's environment is just a Linux sandbox that your task is executed in. Anything you write that runs on your machine, if packaged correctly, should be able to be run in IronWorker. For example workers, check out the examples repository or the page for your favorite language versions and runtime environments . 2. Create and Upload Your Code Package You need to package all your code and its dependencies together and upload it to IronWorker before it can be run—your code is run entirely on IronWorker's servers, so it's important that you package all the dependencies of your code with it. There are several parts to this step, but you only need to do this when you want to update your worker's code, so you shouldn't have to do it very often. Note: You only need to upload your code package when your code changes . Queuing tasks does not require you to upload the code package again. The suggested way to do this is through a .worker (pronounced \"dotworker\") file. There's extensive documentation for .worker files, but the snippet below should be enough to get you started. Just save it as \" firstworker.worker \" in the same directory as your code. firstworker.worker runtime \"ruby\" # The runtime the code should be run under: ruby, python, php, or sh exec \"path/to/file.rb\" # The file to execute when a task is queued. Your worker's entry point file \"config.json\" # The path to a file to upload as a dependency of the worker; just leave this out if you don't have any dependencies. Note: You should never have a file named just \".worker\". Always use a unique, recognisable name—it's what your code package will be named. \"helloworld.worker\" will create a code package named \"helloworld\", \"turnintoaunicorn.worker\" will create a code package named \"turnintoaunicorn\", etc. Once you've defined your worker and its dependencies with a .worker file, you can upload it using the command line tool for IronWorker. Note: You'll need to have Ruby 1.9+ installed to use the IronWorker CLI. After that, just run \" gem install iron_worker_ng \" to get the tool. To interact with the IronWorker API, you'll need your project's ID and an auth token from the HUD . Once you retrieve them, you need to configure the CLI to use them . Create an iron.json file in the same folder as your firstworker.worker file that looks like this: iron.json { \"project_id\" : \"INSERT YOUR PROJECT ID HERE\" , \"token\" : \"INSERT YOUR AUTH TOKEN HERE\" } Once that's done, you can just run the following command: Command Line $ iron_worker upload firstworker That will upload the code package to IronWorker and name it \"firstworker\"—you'll use the name to queue tasks to the package. That's it, your code is ready to be run! 3. Queue/Schedule Your Task Now that you've uploaded your worker, you can just queue tasks against it to run them, in parallel, in the cloud. You can queue a task directly from your favorite language versions and runtime environments or from the command line: Command Line $ iron_worker queue firstworker You can also specify a payload, which is a string that will be supplied to your worker while it runs the task, letting you pass in information. Almost all payloads are JSON strings: Command Line $ iron_worker queue firstworker -p '{\"key1\": \"val1\", \"obj1\": {\"key2\": \"val2\"}, \"arr1\": [\"item1\", \"item2\"]}' Most clients—including the CLI—will automatically handle parsing the payload in your worker, so you can just access the variable or function they give you in your worker's code. We have more information on payloads, if you're curious. Protip: we also offer a webhook endpoint —it's great for automating tasks. Check out our blog post for more information on this. Sometimes you want tasks that repeat periodically, or that will be queued (queued, not executed) at a specific time or even date. IronWorker supports scheduled tasks for this purpose. They're just like regular tasks (payloads, executed in the cloud, in parallel), but they're queued on your schedule. Again, you can schedule a task from your your favorite language versions and runtime environments or from the command line: Command Line $ iron_worker schedule firstworker --start-at \"2012-07-19T23:51:00-04:00\" If you want to just start a task after a short delay, you can do that too. Just specify --delay followed by the number of seconds the task should be delayed. Command Line $ iron_worker schedule firstworker --delay 120 If you want to have a task repeat, you can just specify the --run-every option, followed by the number of seconds between each run: Command Line $ iron_worker schedule firstworker --run-every 60 There are a lot of options to configure scheduling tasks; check our more detailed guide to see more of the options. 4. Inspect Your Worker (Logging) Logging to STDOUT (default) In a perfect world, your workers would just work. Sometimes though, workers have bugs in them or an API is down or something goes wrong. In these situations, it's helpful to have some debugging tools. To aid in debugging, everything that is printed to STDOUT (everything from puts or print or echo or your language's equivalent) in a worker is logged in the HUD . Also, in case you think your package wasn't built correctly or forget which version of your worker is running, the HUD offers downloads for every revision of every code package you upload. Logging to External Services Sometimes it is more helpful to have all logs in one place. Say you have a big web application and want to consolidate the logs of all your tasks and run global searches. We make that super simple to do. Read this blog article on how to setup real-time remote logging to external services. In the article, Papertrail is used as an example but you can send your log output to any syslog endpoint and see it in real-time. You can run your own syslog server with something like syslogd or Splunk , or you can use popular logging services such as Papertrail or Loggly . Next Steps You should be well-grounded in the IronWorker paradigm now—now you need to build something cool! Check out our runtime/languagea . documentation or reference material to explore the boundaries of IronWorker's system. If you're looking for ideas about what you can accomplish with IronWorker, you may want to check out our solutions . "
}, {
"title": "Integrations",
"url": "/worker/integrations/index.html",
"section": null,
"date": null,
"categories": [],
"summary": null,
"short": "This page contains different kind of integrations for IronWorker. Including IronMQ, IronCache and 3rdparty services. Iron.io Services It is very easy to integrate IronMQ or IronCache into your worker. It includes 3 steps. Get the client library for your language: IronMQ libraries . IronCache libraries . Go to Geting Started page to instructions. It works as any other library for",
"body": "This page contains different kind of integrations for IronWorker. Including IronMQ, IronCache and 3rdparty services. Iron.io Services It is very easy to integrate IronMQ or IronCache into your worker. It includes 3 steps. Get the client library for your language: IronMQ libraries . IronCache libraries . Go to Geting Started page to instructions. It works as any other library for your programming language: IronMQ Overview . Use orange button at top right to select the language you prefer. IronCache documentation is on client libraries' pages. Basics & Ruby gem integration are available here . Create your worker. Make sure to include your configuration file in the worker to provide your credentials. Upload and queue your worker. Now you can login in the HUD and check worker's log. 3rdparty Services IronWorker is available at several 3rdparty services as add-on. It is also possible to use your Iron.io account instead of adding the add-on. List of engines: AppFog IronWorker add-on CloudControl EngineYard Heroku IronWorker StackMob IronWorker "
}, {
"title": "IronCasts",
"url": "/worker/iron_casts/index.html",
"section": null,
"date": null,
"categories": [],
"summary": null,
"short": "This page lists past IronCasts series. The screencasts are intended to be short and focused on some of the most commonly asked questions. IronCast Series 1 - Introduction to IronWorker In a series of four IronCasts, we will provide a high-level introduction to using IronWorker. IronWorker is an easy-to-use scalable task queue that gives cloud developers a simple way to",
"body": "This page lists past IronCasts series. The screencasts are intended to be short and focused on some of the most commonly asked questions. IronCast Series 1 - Introduction to IronWorker In a series of four IronCasts, we will provide a high-level introduction to using IronWorker. IronWorker is an easy-to-use scalable task queue that gives cloud developers a simple way to offload front-end tasks, run scheduled jobs, and process tasks in the background and at scale. These videocasts will cover core concepts including: - Deploying a worker - Writing worker files to declare dependencies - Test and prototype workers rapidly locally - Connecting to a cloud development database We will be using an example application which is written in Rails. However, the same concept applies to every language or framework. IronWorker can handle almost every language including binary files and so if you program in PHP,Python, Node.js, or other languages, don't worry, we have client libraries and examples to show you the way. It should also be possible to convert this example to the language of your choice without much effort. Please refer to further documentation here. "
}, {
"title": "Writing Workers in .NET",
"url": "/worker/languages/dotnet/index.html",
"section": null,
"date": null,
"categories": [],
"summary": null,
"short": ".NET is a framework from Microsoft that is the de-facto standard for writing software that runs on Windows, Windows Server, and Windows Phone. Now you can integrate your existing .NET codebase with IronWorker, without needing to learn a new language. This article will walk you through getting .NET workers running on IronWorker, but you should still take the time to",
"body": ".NET is a framework from Microsoft that is the de-facto standard for writing software that runs on Windows, Windows Server, and Windows Phone. Now you can integrate your existing .NET codebase with IronWorker, without needing to learn a new language. This article will walk you through getting .NET workers running on IronWorker, but you should still take the time to familiarise yourself with the basics of IronWorker . Table of Contents Quick Start Get the CLI Create Your Configuration File Write Your .NET Worker Compile Your .NET Worker Create a .worker File Upload Your Worker Queue Up Tasks for Your Worker Deep Dive Payload Example Quick Start Get the CLI We've created a command line interface to the IronWorker service that makes working with the service a lot easier and more convenient. It does, however, require you to have Ruby 1.9+ installed and to install the iron_worker_ng gem. Once Ruby 1.9+ is installed, you can just the following command to get the gem: Command Line $ gem install iron_worker_ng Create Your Configuration File The CLI needs a configuration file or environment variables set that tell it what your credentials are. We have documentation about how this works, but for simplicity's sake, just save the following as iron.json in the same folder as your .worker file: { \"project_id\" : \"INSERT YOUR PROJECT ID HERE\" , \"token\" : \"INSERT YOUR TOKEN HERE\" } You should insert your project ID and token into that iron.json file. Then, assuming you're running the commands from within the folder, the CLI will pick up your credentials and use them automatically. Write Your .NET Worker public class HelloWorld { static public void Main ( string [] args ) { System . Console . WriteLine ( \"Hello World from .NET!\" ); } } Compile Your .NET Worker For .NET code, IronWorker runs the compiled executables in the cloud, so you're going to need to generate the executable. It's likely your development environment (e.g. Visual Studio) has a simple way to do this; that will work just fine. If you're a Mono users, use gmcs : gmcs hello.cs Create a .worker File Worker files are a simple way to define your worker and its dependencies. Save the following in a file called hello.worker : # set the runtime language; this should be \"mono\" for .NET workers runtime \"mono\" # exec is the file that will be executed when you queue a task exec \"hello.exe\" # replace with your file Upload Your Worker iron_worker upload hello That command will read your .worker file, create your worker code package and upload it to IronWorker. Head over to hud.iron.io , click the Worker link on your projects list, then click the Tasks tab. You should see your new worker listed there with zero runs. Click on it to show the task list, which will be empty, but not for long. Let’s quickly test it by running: iron_worker queue hello Now look at the task list in HUD and you should see your task show up and go from \"queued\" to \"running\" to \"completed\". Now that we know it works, let’s queue up a bunch of tasks from code. Note : Once you upload a code package, you can queue as many tasks as you'd like against it. You only need to re-upload the code package when your code changes. Queue Up Tasks for Your Worker Once your code has been uploaded, it's easy to queue a task to it. It's a single, authenticated POST request with a JSON object. The example below queues up a task for your worker. Just insert your project ID and token at the bottom (that third argument is the name of your worker). using System ; using System.Net ; public class QueueTask { private static string queue_task ( string projectId , string token , string worker ) { string uri = \"https://worker-us-east.iron.io:443/2/projects/\" + projectId + \"/tasks\" ; HttpWebRequest request = ( HttpWebRequest ) HttpWebRequest . Create ( uri ); request . ContentType = \"application/json\" ; request . Headers . Add ( \"Authorization\" , \"OAuth \" + token ); request . UserAgent = \"IronMQ .Net Client\" ; request . Method = \"POST\" ; // We hand code the JSON payload here. You can automatically convert it, if you prefer string body = \"{\\\"tasks\\\": [ { \\\"code_name\\\": \\\"\" + worker + \"\\\", \\\"payload\\\": \\\"{\\\\\\\"key\\\\\\\": \\\\\\\"value\\\\\\\", \\\\\\\"fruits\\\\\\\": [\\\\\\\"apples\\\\\\\", \\\\\\\"oranges\\\\\\\"]}\\\"} ] }\" ; if ( body != null ) { using ( System . IO . StreamWriter write = new System . IO . StreamWriter ( request . GetRequestStream ())) { write . Write ( body ); write . Flush (); } } HttpWebResponse response = ( HttpWebResponse ) request . GetResponse (); using ( System . IO . StreamReader reader = new System . IO . StreamReader ( response . GetResponseStream ())) { return reader . ReadToEnd (); } } static public void Main ( string [] args ) { Console . WriteLine ( queue_task ( \"INSERT PROJECT ID\" , \"INSERT TOKEN\" , \"hello\" )); } } Save this as \"enqueue.cs\", compile it, and run it to queue up the task to your worker. You should get a response similar to this: { \"msg\" : \"Queued up\" , \"tasks\" : [{ \"id\" : \"506e1a8e29a33a57650db95d\" }]} For most people, calling the API by hand is overkill. We don't have an official IronWorker library for .NET yet, but our community has built a great project for interacting with our APIs. If you're using Iron.io from .NET, you may wish to check out IronTools . Note: One of our customers, Oscar Deits lent us his considerable expertise with .NET as we came up with this sample code. Thanks Oscar! Deep Dive Payload Example Retrieving the payload in .NET is the same as it is on any other language. Retrieve the -payload argument passed to the script, load that file, and parse it as JSON. Note : This script only parses payloads that consist of strings in a key/value pair. Implementing more advanced parsing is an exercise left to the reader. using System ; using System.IO ; using System.Linq ; using System.Collections.Generic ; using System.Web.Script.Serialization ; public class HelloWorld { static public void Main ( string [] args ) { int ind = Array . IndexOf ( args , \"-payload\" ); if ( ind >= 0 && ( ind + 1 ) < args . Length ){ string path = args [ ind + 1 ]; string payload = File . ReadAllText ( path ); JavaScriptSerializer serializer = new JavaScriptSerializer (); IDictionary < string , string > json = serializer . Deserialize < Dictionary < string , string >>( payload ); foreach ( string key in json . Keys ) { Console . WriteLine ( key + \" = \" + json [ key ] ); } } } } You'll notice that we're using the System.Web.Script assembly in the payload example; you'll need to specify that when compiling the binary. System.Web.Script lives in System.Web.Extensions.dll, so the command looks like this: gmcs payloadworker.cs -r:System.Web.Extensions.dll "
}, {
"title": "Writing Workers in Go",
"url": "/worker/languages/go/index.html",
"section": null,
"date": null,
"categories": [],
"summary": null,
"short": "The Go programming language is a fast, statically typed, compiled language with an emphasis on concurrency. It's a great language for cloud systems (we use it here at Iron.io!) and is a natural fit for workers. Go Workers need to be compiled, then uploaded. Once they're uploaded to the IronWorker cloud, they can be invoked via a simple API to",
"body": "The Go programming language is a fast, statically typed, compiled language with an emphasis on concurrency. It's a great language for cloud systems (we use it here at Iron.io!) and is a natural fit for workers. Go Workers need to be compiled, then uploaded. Once they're uploaded to the IronWorker cloud, they can be invoked via a simple API to be put on the processing queues immediately or scheduled to run at a later time—you only need to upload the worker again when the code changes. This article will walk you through the specifics of things, but you should be familiar with the basics of IronWorker . Note : we don't use it for this walkthrough, but there's a great library for working with the IronWorker API in Go. If working with raw HTTP requests doesn't sound like fun to you, check it out. Table of Contents Quick Start Get the CLI Create Your Configuration File Write Your Go Worker Compile Your Go Worker to a Binary File Create a .worker File Upload Your Worker Queue Up Tasks for Your Worker Deep Dive Payload Example Cross Compiling Quick Start Get the CLI We've created a command line interface to the IronWorker service that makes working with the service a lot easier and more convenient. It does, however, require you to have Ruby 1.9+ installed and to install the iron_worker_ng gem. Once Ruby 1.9+ is installed, you can just the following command to get the gem: Command Line $ gem install iron_worker_ng Create Your Configuration File The CLI needs a configuration file or environment variables set that tell it what your credentials are. We have some pretty good documentation about how this works, but for simplicity's sake, just save the following as iron.json in the same folder as your .worker file: iron.json { \"project_id\" : \"INSERT YOUR PROJECT ID HERE\" , \"token\" : \"INSERT YOUR TOKEN HERE\" } You should insert your project ID and token into that iron.json file. Then, assuming you're running the commands from within the folder, the CLI will pick up your credentials and use them automatically. Write Your Go Worker hello_worker.go package main import \"fmt\" func main () { fmt . Println ( \"Hello World from Go.\" ) } Compile Your Go Worker to a Binary File You may need to recompile Go with GOOS=linux , GOARCH=amd64 , and CGO_ENABLED=0 before you can cross compile from Windows, Mac, or a 32 bit machine. GOOS = linux GOARCH = amd64 go build Create a .worker File Worker files are a simple way to define your worker and its dependencies. Save the following in a file called hello.worker : hello.worker # set the runtime language; this should be \"binary\" for Go workers runtime \"binary\" # exec is the file that will be executed when you queue a task exec \"hello_worker\" # replace with your Go executable Upload Your Worker Command Line $ iron_worker upload hello That command will read your .worker file, create your worker code package and upload it to IronWorker. Head over to hud.iron.io , click the Worker link on your projects list, then click the Tasks tab. You should see your new worker listed there with zero runs. Click on it to show the task list which will be empty, but not for long. Let’s quickly test it by running: iron_worker queue hello Now look at the task list in HUD and you should see your task show up and go from \"queued\" to \"running\" to \"completed\". Now that we know it works, let’s queue up a bunch of tasks from code. Note : Once you upload a code package, you can queue as many tasks as you'd like against it. You only need to re-upload the code package when your code changes. Queue Up Tasks for Your Worker Once your code has been uploaded, it's easy to queue a task to it. It's a single, authenticated POST request with a JSON object. The following program will queue up a task to your worker; just insert your token and project ID into the code. enqueue.go package main import ( \"fmt\" \"net/http\" \"io/ioutil\" \"encoding/json\" \"bytes\" ) type Task struct { CodeName string `json:\"code_name\"` Payload string `json:\"payload\"` } type ReqData struct { Tasks [] * Task `json:\"tasks\"` } func main () { const token = \"INSERT TOKEN HERE\" const project = \"INSERT PROJECT ID HERE\" // Insert our project ID and token into the API endpoint target := fmt . Sprintf ( \"http://worker-us-east.iron.io/2/projects/%s/tasks?oauth=%s\" , project , token ) // Build the payload // The payload is a string to pass information into your worker as part of a task // It generally is a JSON-serialized string (which is what we're doing here) that can be deserialized in the worker payload := map [ string ] interface {} { \"arg1\" : \"Test\" , \"another_arg\" : [] string { \"apples\" , \"oranges\" }, } payload_bytes , err := json . Marshal ( payload ) if err != nil { panic ( err . Error ()) } payload_str := string ( payload_bytes ) // Build the task task := & Task { CodeName : \"GoWorker\" , Payload : payload_str , } // Build a request containing the task json_data := & ReqData { Tasks : [] * Task { task }, } json_bytes , err := json . Marshal ( json_data ) if err != nil { panic ( err . Error ()) } json_str := string ( json_bytes ) // Post expects a Reader json_buf := bytes . NewBufferString ( json_str ) // Make the request resp , err := http . Post ( target , \"application/json\" , json_buf ) if err != nil { panic ( err . Error ()) } defer resp . Body . Close () // Read the response resp_body , err := ioutil . ReadAll ( resp . Body ) if err != nil { panic ( err . Error ()) } // Print the response to STDOUT fmt . Println ( string ( resp_body )) } Save this as \"enqueue.go\" and use go run enqueue.go to queue up the task for your worker. You should get a response similar to this: { \"msg\" : \"Queued up\" , \"status_code\" : 200 , \"tasks\" : [{ \"id\" : \"4f9b51631bab47589b017391\" }]} If you check in the HUD , you should see the task. Deep Dive Payload Example Retrieving the payload from within the worker on Go is the same as it is on any other language. Retrieve the -payload argument passed to the script, load that file, and parse it as JSON. payload.go package main import ( \"io/ioutil\" \"os\" \"fmt\" \"encoding/json\" ) func main () { payloadIndex := 0 for index , arg := range ( os . Args ) { if arg == \"-payload\" { payloadIndex = index + 1 } } if payloadIndex >= len ( os . Args ) { panic ( \"No payload value.\" ) } payload := os . Args [ payloadIndex ] var data interface {} raw , err := ioutil . ReadFile ( payload ) if err != nil { panic ( err . Error ()) } err = json . Unmarshal ( raw , & data ) if err != nil { panic ( err . Error ()) } fmt . Printf ( \"%v\\n\" , data ) } Cross Compiling To make a binary distribution that runs on the IronWorker cloud, it's often necessary to compile your Go executable for a system different from your native system—unless you're running 64 bit Linux, the binaries you generate won't be executable on IronWorker's cloud. The solution to this is \"cross compile\" your Go Workers. By recompiling Go with specific flags set, you can compile binaries that will work on IronWorker. You can find more information on that in the Go mailing list . The GOOS value should be set to linux and the GOARCH value should be set to amd64 . Note that you must disable cgo to cross compile Go. This means that certain packages ( net being the most notable) will take a performance hit. "
}, {
"title": "Writing Workers in Java",
"url": "/worker/languages/java/index.html",
"section": null,
"date": null,
"categories": [],
"summary": null,
"short": "Java has become one of the most popular languages in the enterprise. With Java workers, you can use the same tools your enterprise software uses, but with the power of the cloud behind it. Java workers need to be compiled into jar files before they're uploaded. Once they're uploaded to the IronWorker cloud, they can be invoked via a simple",
"body": "Java has become one of the most popular languages in the enterprise. With Java workers, you can use the same tools your enterprise software uses, but with the power of the cloud behind it. Java workers need to be compiled into jar files before they're uploaded. Once they're uploaded to the IronWorker cloud, they can be invoked via a simple API call to be put on the processing queues immediately or scheduled to run at a later time—you only need to upload the worker again when the code changes. This article will walk you through the specifics of using Java workers, but you should be familiar with the basics of IronWorker . Table of Contents Quick Start Get the CLI Create Your Configuration File Write Your Java Worker Compile Your Java Worker to a jar File Create a .worker File Upload Your Worker Queue Up Tasks for Your Worker Deep Dive Payload Example Get GSON Modify The Worker Recompile the jar File Update the .worker File and Reupload Quick Start Get the CLI We've created a command line interface to the IronWorker service that makes working with the service a lot easier and more convenient. It does, however, require you to have Ruby 1.9+ installed and to install the iron_worker_ng gem. Once Ruby 1.9+ is installed, you can just the following command to get the gem: Command Line $ gem install iron_worker_ng Create Your Configuration File The CLI needs a configuration file or environment variables set that tell it what your credentials are. We have some pretty good documentation about how this works, but for simplicity's sake, just save the following as iron.json in the same folder as your .worker file: iron.json { \"project_id\" : \"INSERT YOUR PROJECT ID HERE\" , \"token\" : \"INSERT YOUR TOKEN HERE\" } You should insert your project ID and token into that iron.json file. Then, assuming you're running the commands from within the folder, the CLI will pick up your credentials and use them automatically. Write Your Java Worker HelloWorld.java public class HelloWorld { public static void main ( String [] args ) { System . out . println ( \"Hello World from Java\" ); } } Compile Your Java Worker to a jar File IronWorker runs jar files that you upload to the cloud. You need to generate these jar files first, however. It's likely your development environment already has a simple method for generating these files, but in case it doesn't, you can generate them from the command line. First, create a manifest.txt file in the same directory as your Worker. Put the following in it: Main-Class: HelloWorld Then run the following commands: Command Line $ javac HelloWorld.java $ jar cfm hello.jar manifest.txt HelloWorld.class A hello.jar file will now be in the same directory as your worker. Create a .worker File Worker files are a simple way to define your worker and its dependencies. Save the following in a file called hello.worker : hello.worker # set the runtime language; this should be \"java\" for Java workers runtime \"java\" # exec is the file that will be executed when you queue a task exec \"hello.jar\" # replace with your jar file Upload Your Worker Command Line $ iron_worker upload hello That command will read your .worker file, create your worker code package and upload it to IronWorker. Head over to hud.iron.io , click the Worker link on your projects list, then click the Tasks tab. You should see your new worker listed there with zero runs. Click on it to show the task list which will be empty, but not for long. Let’s quickly test it by running: iron_worker queue hello Now look at the task list in HUD and you should see your task show up and go from \"queued\" to \"running\" to \"completed\". Now that we know it works, let’s queue up a bunch of tasks from code. Note : Once you upload a code package, you can queue as many tasks as you'd like against it. You only need to re-upload the code package when your code changes. Queue Up Tasks for Your Worker Once your code has been uploaded, it's easy to queue a task to it. The following example will queue up a task using the iron_worker_java library. Just insert your token and project ID into the code. Enqueue.java import io.iron.ironworker.client.Client ; import io.iron.ironworker.client.entities.TaskEntity ; import io.iron.ironworker.client.builders.Params ; import io.iron.ironworker.client.builders.TaskOptions ; import io.iron.ironworker.client.APIException ; public class Enqueue { public static void main ( String [] args ) throws APIException { Client client = new Client ( \"INSERT TOKEN HERE\" , \"INSERT PROJECT ID HERE\" ); TaskEntity t = client . createTask ( \"JavaWorker\" , Params . add ( \"arg1\" , \"Test\" ). add ( \"another_arg\" , new String []{ \"apples\" , \"oranges\" })); System . out . println ( t . getId ()); } } Save that as \"Enqueue.java\" and compile it. Run the compiled code (usually java Enqueue , but your IDE may have an easier way to run your code) and you'll see the queued task's ID printed. Deep Dive Payload Example Retrieving the payload in Java is largely the same as it is on any other language. Retrieve the -payload argument passed to the script, load that file, and parse it as JSON. Java doesn't play nicely with JSON, however, so this takes a little more work for Java than it does for the other languages. Get GSON First, you're going to need the GSON library—this is a library that Google released that can take JSON and turn it into Java objects, and vice-versa. Go ahead and download the latest release, unzip it, and copy the gson-#.#.jar file to the directory your worker is in. Rename the jar file to gson.jar, to make life easier. Modify The Worker Next, we're going to modify your worker to load the file and parse it as JSON: Enqueue.java import java.io.File ; import java.io.IOException ; import java.io.FileInputStream ; import java.nio.MappedByteBuffer ; import java.nio.charset.Charset ; import java.nio.channels.FileChannel ; import com.google.gson.Gson ; import com.google.gson.JsonObject ; import com.google.gson.JsonArray ; import com.google.gson.JsonParser ; public class HelloWorld { public static void main ( String [] args ) { //obtain the filename from the passed arguments int payloadPos = - 1 ; for ( int i = 0 ; i < args . length ; i ++) { if ( args [ i ]. equals ( \"-payload\" )) { payloadPos = i + 1 ; break ; } } if ( payloadPos >= args . length ) { System . err . println ( \"Invalid payload argument.\" ); System . exit ( 1 ); } if ( payloadPos == - 1 ) { System . err . println ( \"No payload argument.\" ); System . exit ( 1 ); } //read the contents of the file to a string String payload = \"\" ; try { payload = readFile ( args [ payloadPos ]); } catch ( IOException e ) { System . err . println ( \"IOException\" ); System . exit ( 1 ); } //The string looks like this: // { \"arg1\": \"Test\", \"another_arg\": [\"apples\", \"oranges\"]} //parse the string as JSON Gson gson = new Gson (); JsonParser parser = new JsonParser (); JsonObject passed_args = parser . parse ( payload ). getAsJsonObject (); //print the output of the \"arg1\" property of the passed JSON object System . out . println ( \"arg1 = \" + gson . fromJson ( passed_args . get ( \"arg1\" ), String . class )); //the \"another_arg\" property is an array, so parse it as one String [] another_arg = gson . fromJson ( passed_args . get ( \"another_arg\" ), String []. class ); //print the first and second elements of the array System . out . println ( \"another_arg[0] = \" + another_arg [ 0 ]); System . out . println ( \"another_arg[1] = \" + another_arg [ 1 ]); } private static String readFile ( String path ) throws IOException { FileInputStream stream = new FileInputStream ( new File ( path )); try { FileChannel chan = stream . getChannel (); MappedByteBuffer buf = chan . map ( FileChannel . MapMode . READ_ONLY , 0 , chan . size ()); return Charset . defaultCharset (). decode ( buf ). toString (); } finally { stream . close (); } } } Recompile the jar File We're going to have to modify that manifest.txt file before we can use the GSON jar, though, so replace manifest.txt with the following: Main-Class: HelloWorld Class-Path: gson.jar Next we need to compile the Java file, but we need to insert the gson.jar file into the classpath on compile, so the compiler can find it. Use this new command: Command Line $ javac -cp \".:gson.jar\" HelloWorld.java If you're on Windows, that command looks a little different (Windows uses a different character to separate classpaths): Command Line $ javac -cp \".;gson.jar\" HelloWorld.java Now we need to generate another jar file: Command Line $ jar cfm hello.jar manifest.txt HelloWorld.class Update the .worker File and Reupload Finally, we need to modify the .worker file to include the gson.jar file in the code package it uploads. The new file is below: HelloWorld.worker # set the runtime language; this should be \"java\" for Java workers runtime \"java\" # exec is the file that will be executed when you queue a task exec \"hello.jar\" # replace with your jar file # file includes a file file \"path/to/gson.jar\" # replace with the path to your gson.jar file Upload that again by running the following command: Command Line $ iron_worker upload hello Your worker will start printing out the contents of the payload. "
}, {
"title": "Writing Workers in Node.js",
"url": "/worker/languages/nodejs/index.html",
"section": null,
"date": null,
"categories": [],
"summary": null,
"short": "Notice -- For the NPM Certificate Error. Please use the following in your \".worker\" file, instead of build \"npm install\" : build \"npm config set strict-ssl false; npm install --production\" Node.js is an evented language that brings the well-known Javascript language to server-side development, using Google's V8 runtime. The evented model of programming lends itself nicely to the asynchronous nature",
"body": " Notice -- For the NPM Certificate Error. Please use the following in your \".worker\" file, instead of build \"npm install\" : build \"npm config set strict-ssl false; npm install --production\" Node.js is an evented language that brings the well-known Javascript language to server-side development, using Google's V8 runtime. The evented model of programming lends itself nicely to the asynchronous nature of workers, making it a natural fit for IronWorker. This article will walk you through getting your Node.js workers running on IronWorker, but you should still be familiar with the basics of IronWorker . Table of Contents Quick Start Get the CLI Create Your Configuration File Write Your Node.js Worker Accessing the Params and Config Variables. Create a .worker File Upload Your Worker Queue Up Tasks for Your Worker Deep Dive Payload Example Exit Worker expicitly with an exit code Packaging Dependencies Quick Start Get the CLI We've created a command line interface to the IronWorker service that makes working with the service a lot easier and more convenient. It does, however, require you to have Ruby 1.9+ installed and to install the iron_worker_ng gem. Once Ruby 1.9+ is installed, you can just the following command to get the gem: Command Line $ gem install iron_worker_ng Create Your Configuration File The CLI needs a configuration file or environment variables set that tell it what your credentials are. We have some pretty good documentation about how this works, but for simplicity's sake, just save the following as iron.json in the same folder as your .worker file: iron.json { \"project_id\" : \"INSERT YOUR PROJECT ID HERE\" , \"token\" : \"INSERT YOUR TOKEN HERE\" } You should insert your project ID and token into that iron.json file. Then, assuming you're running the commands from within the folder, the CLI will pick up your credentials and use them automatically. Write Your Node.js Worker hello_worker.js console . log ( \"Hello World from Node.js.\" ); Accessing the Params and Config Variables. To access the contents of the configuration and payload variables from within your worker use the following helpers we've included in your environment. see source for these helpers here . hello_worker.js var worker = require ( 'node_helper' ); console . log ( \"params:\" , worker . params ); console . log ( \"config:\" , worker . config ); console . log ( \"task_id:\" , worker . task_id ); Create a .worker File Worker files are a simple way to define your worker and its dependencies. Save the following in a file called hello.worker : hello.worker # set the runtime language; this should be \"node\" for Node.js workers runtime \"node\" # exec is the file that will be executed when you queue a task exec \"hello_worker.js\" # replace with your file To change your worker's version, you may place stack \"node-0.10\" (e.x.) in your .worker file, for more see .worker syntax . Upload Your Worker Command Line $ iron_worker upload hello That command will read your .worker file, create your worker code package and upload it to IronWorker. Head over to hud.iron.io , click the Worker link on your projects list, then click the Tasks tab. You should see your new worker listed there with zero runs. Click on it to show the task list which will be empty, but not for long. Let’s quickly test it by running: iron_worker queue hello Now look at the task list in HUD and you should see your task show up and go from \"queued\" to \"running\" to \"completed\". Now that we know it works, let’s queue up a bunch of tasks from code. Note : Once you upload a code package, you can queue as many tasks as you'd like against it. You only need to re-upload the code package when your code changes. Queue Up Tasks for Your Worker Once your code has been uploaded, it's easy to queue a task to it. It's a single, authenticated POST request with a JSON object. The example below queues up a task for your NodeWorker. Just insert your project ID and token at the bottom (that third argument is the name of your worker). enqueue.js var https = require ( \"https\" ); function queue_task ( project , token , code_name ) { // Build the payload var payload = { \"arg1\" : \"Test\" , \"another_arg\" : [ \"apples\" , \"oranges\" ] }; var req_json = { \"tasks\" : [{ \"code_name\" : code_name , \"payload\" : JSON . stringify ( payload ) }] } // Convert the JSON data var req_data = JSON . stringify ( req_json ); // Create the request headers var headers = { 'Authorization' : 'OAuth ' + token , 'Content-Type' : \"application/json\" }; // Build config object for https.request var endpoint = { \"host\" : \"worker-us-east.iron.io\" , \"port\" : 443 , \"path\" : \"/2/projects/\" + project + \"/tasks\" , \"method\" : \"POST\" , \"headers\" : headers }; var post_req = https . request ( endpoint , function ( res ) { console . log ( \"statusCode: \" , res . statusCode ); res . on ( 'data' , function ( d ) { process . stdout . write ( d ); }); }); post_req . write ( req_data ) post_req . end (); post_req . on ( 'error' , function ( e ) { console . error ( e ); }); } queue_task ( \"INSERT PROJECT ID\" , \"INSERT TOKEN\" , \"NodeWorker\" ); Save this as \"enqueue.js\" and use node enqueue.js to queue up the task to your worker. You should get a response similar to this: statusCode : 200 { \"msg\" : \"Queued up\" , \"status_code\" : 200 , \"tasks\" : [{ \"id\" : \"4f9ecdd01bab47589b02a097\" }]} Note : Please make sure to check out our official node client library Deep Dive Payload Example Retrieving the payload in Node.js is the same as it is on any other language. Retrieve the -payload argument passed to the script, load that file, and parse it as JSON. We've included a useful helper module in node to assist in retrieving the payload and configuration variables in node. Simply require the helper module and call config, params, task_id. payload.js var worker = require ( 'node_helper' ); console . log ( \"params:\" , worker . params ); // you can also access the following console . log ( \"config:\" , worker . config ); console . log ( \"task_id:\" , worker . task_id ); Packaging Worker Dependencies using Node dependencies with Node require that you create a package.json file To generate a package.json the following more info: npm init npm-init when adding and installing modules run then following to automatically update your package.json manifest. npm install <module name> --save Ensuring your script exits with the right exit code It is important in some cases to declare a explicit exit code to give our systems a indication if your worker has completed sucessfully or failed. this also prevents instances where your worker may just hang or wait. In your worker: process . exit ( 1 ); process . exit ( 0 ); Local build requirements - package.json with included dependencies -/node_modules directory If you're using NPM modules within your worker, you're going to need to package those dependencies when you upload the worker. To do this, add dir \"node_modules\" and file \"package.json\" to your .worker file: hello.worker # set the runtime language; this should be \"node\" for Node.js workers runtime \"node\" # exec is the file that will be executed when you queue a task exec \"hello_worker.js\" # replace with your file dir \"node_modules\" # include dependency files when uploading file \"package.json\" # include dependency manifest when uploading Remote build requirements - package.json with included dependencies If you're using NPM modules within your worker, you're going to need to package those dependencies when you upload the worker. To do this, add a dir \"node_modules\" line and a file \"package.json\" line to your .worker file: hello.worker runtime \"node\" exec \"hello_worker.js\" # replace with your file file \"package.json\" # include dependency manifest when uploading build \"npm install\" # run npm install # build your dependencies remotely from package.json remote # you can use \"full_remote_build true\" or shorthand \"remote\" "
}, {
"title": "Writing Workers in PHP",
"url": "/worker/languages/php/index.html",
"section": null,
"date": null,
"categories": [],
"summary": null,
"short": "PHP has grown to be one of the most popular languages to write web software in. You can add some power to your current PHP application using PHP workers on IronWorker. This article will help you get started with PHP workers, but you should be familiar with the basics of IronWorker . Table of Contents Quick Start Get the CLI",
"body": "PHP has grown to be one of the most popular languages to write web software in. You can add some power to your current PHP application using PHP workers on IronWorker. This article will help you get started with PHP workers, but you should be familiar with the basics of IronWorker . Table of Contents Quick Start Get the CLI Get the PHP Client Library Create Your Configuration File Write Your PHP Worker Create a .worker File Upload the Worker Queue Up Tasks for Your Worker Deep Dive Payload Example Environment Quick Start Get the CLI We've created a command line interface to the IronWorker service that makes working with the service a lot easier and more convenient. It does, however, require you to have Ruby 1.9+ installed and to install the iron_worker_ng gem. Once Ruby 1.9+ is installed, you can just the following command to get the gem: Command Line $ gem install iron_worker_ng Get the PHP Client Library You can download the PHP client library, iron_worker_php , from Github . If you're using PHP 5.3 or greater, you can just download the iron_worker.phar file. If you're using an earlier version of PHP, you need to download the IronWorker.class.php file and the IronCore.class.php file from here . If you aren't sure which version of PHP you're using, you can run php -v from your shell to find out. Create Your Configuration File The PHP library uses a configuration file or environment variables set that tell it what your credentials are. We have some pretty good documentation about how this works, but for simplicity's sake, just save the following as iron.json in the same folder as your .worker file: iron.json { \"project_id\" : \"INSERT YOUR PROJECT ID HERE\" , \"token\" : \"INSERT YOUR TOKEN HERE\" } You should insert your project ID and token into that iron.json file. Then, assuming you're running the commands from within the folder, the CLI will pick up your credentials and use them automatically. Write Your PHP Worker Save the following as hello_worker.php : hello_worker.php <?php echo \"Hello from PHP\" ; ?> Create a .worker File Worker files are a simple way to define your worker and its dependencies. Save the following in a file called hello.worker hello.worker # set the runtime language. PHP workers use \"php\" runtime \"php\" # exec is the file that will be executed: exec \"hello_worker.php\" You could include gems and other files in there too. You can read more about .worker files here . Upload the Worker Command Line $ iron_worker upload hello That command will read your .worker file, create your worker code package and upload it to IronWorker. Head over to hud.iron.io , click the Worker link on your projects list, then click the Tasks tab. You should see your new worker listed there with zero runs. Click on it to show the task list which will be empty, but not for long. Queue Up Tasks for Your Worker Save the following as enqueue.php : enqueue.php <?php require ( \"phar://iron_worker.phar\" ); /* If your PHP is less than 5.3, comment out the line above and uncomment the two following lines */ //require(\"IronWorker.class.php\"); //require(\"IronCore.class.php\"); $worker = new IronWorker (); $res = $worker -> postTask ( \"PHPWorker\" ); print_r ( $res ); ?> You can now queue up a task by calling php enqueue.php from your shell. Another way is to use CLI: Command Line $ iron_worker queue hello Now look at the task list in HUD and you should see your task show up and go from \"queued\" to \"running\" to \"completed\". Now that we know it works, let’s queue up a bunch of tasks from code. Note: Once you upload a code package, you can queue as many tasks as you'd like against it. You only need to re-upload the code package when your code changes. Deep Dive Payload Example Retrieving the payload in PHP is the same as it is on any other language. Retrieve the -payload argument passed to the script, load that file, and parse it as JSON. Fortunately, the iron_worker_php library includes a helper function with your worker that makes this easy. Just call getPayload(); to retrieve the payload. hello_worker.php <?php $payload = getPayload (); print_r ( $payload ); ?> Environment The PHP environment that the workers run in on IronWorker is as follows: PHP Version Version 5.3.6 Installed Modules php5-curl php5-mysql php5-gd mongo You can just use require_once('{MODULE_NAME}'); to use these modules in your workers. Note: While it is possible to use these modules without bundling them, we highly recommend that you include modules your code is reliant upon in the code package whenever possible. Most of these modules are included in the environment because they are binary modules, making it impossible to supply them at runtime. The ones that are not binary modules are some of the more popular modules, which we include to allow users to try things out and test things with minimal setup and pain. We cannot guarantee which version of the module will be available, and we may update them without warning. Reliance on these modules may cause some unexpected conflicts in your code. "
}, {
"title": "Writing Workers in Python",
"url": "/worker/languages/python/index.html",
"section": null,
"date": null,
"categories": [],
"summary": null,
"short": "Python has become one of the most popular languages for web software and scientific or mathematical computing. By offloading tasks to IronWorker, computations and requests can be run in parallel using the power of the cloud. This article will get you started writing Python workers, but you should be familiar with the basics of IronWorker . Table of Contents Quick",
"body": "Python has become one of the most popular languages for web software and scientific or mathematical computing. By offloading tasks to IronWorker, computations and requests can be run in parallel using the power of the cloud. This article will get you started writing Python workers, but you should be familiar with the basics of IronWorker . Table of Contents Quick Start Get the CLI Get the Python Client Library Create Your Configuration File Write Your Python Worker Create a .worker File Upload the Worker Queue Up Tasks for Your Worker Deep Dive Payload Example Exit Worker expicitly with an exit code Environment Quick Start Get the CLI We've created a command line interface to the IronWorker service that makes working with the service a lot easier and more convenient. It does, however, require you to have Ruby 1.9+ installed and to install the iron_worker_ng gem. Once Ruby 1.9+ is installed, you can just the following command to get the gem: Command Line $ gem install iron_worker_ng Get the Python Client Library You can download the Python client library, iron_worker_python , from Github —note that you'll need the iron core python library installed, too. Users of pip or easy_install can simply use pip install iron-worker and easy_install iron-worker . Create Your Configuration File The Python library uses a configuration file or environment variables set that tell it what your credentials are. We have some pretty good documentation about how this works, but for simplicity's sake, just save the following as iron.json in the root of your project: iron.json { \"project_id\" : \"INSERT YOUR PROJECT ID HERE\" , \"token\" : \"INSERT YOUR TOKEN HERE\" } You should insert your project ID and token into that iron.json file. Then, assuming you're running the commands from within the folder, the library will pick up your credentials and use them automatically. Write Your Python Worker hello_worker.py print \"Hello from Python\" Create a .worker File Worker files are a simple way to define your worker and its dependencies. Save the following in a file called hello.worker hello.worker # set the runtime language. Python workers use \"python\" runtime \"python\" # exec is the file that will be executed: exec \"hello_worker.py\" You could include gems and other files in there too. You can read more about .worker files here . Upload the Worker Command Line $ iron_worker upload hello That command will read your .worker file, create your worker code package and upload it to IronWorker. Head over to hud.iron.io , click the Worker link on your projects list, then click the Tasks tab. You should see your new worker listed there with zero runs. Click on it to show the task list which will be empty, but not for long. Queue Up Tasks for Your Worker enqueue.py from iron_worker import * worker = IronWorker () response = worker . queue ( code_name = \"hello\" ) You can now queue up a task by calling python enqueue.py from your shell. Another way is to use CLI: Command Line $ iron_worker queue hello Now look at the task list in HUD and you should see your task show up and go from \"queued\" to \"running\" to \"completed\". Now that we know it works, let’s queue up a bunch of tasks from code. Note: Once you upload a code package, you can queue as many tasks as you'd like against it. You only need to re-upload the code package when your code changes. Deep Dive Payload Example Retrieving the payload in Python is the same as it is on any other language. Retrieve the -payload argument passed to the script, load that file, and parse it as JSON. In your worker: import sys , json payload_file = None payload = None for i in range ( len ( sys . argv )): if sys . argv [ i ] == \"-payload\" and ( i + 1 ) < len ( sys . argv ): payload_file = sys . argv [ i + 1 ] with open ( payload_file , 'r' ) as f : payload = json . loads ( f . read ()) break Ensuring your script exits with the right exit code It is important in some cases to declare a explicit exit code to give our systems a indication if your worker has completed sucessfully or failed. this also prevents instances where your worker may just hang or wait. In your worker: Python : exit ( 1 ) sys . exit ( 1 ) Environment The Python environment that the workers run in on IronWorker is as follows: Python Version Version 2.7.2 Installed Modules python-lxml numpy scipy pymongo gevent PIL You can just use import {MODULE_NAME} to use these modules in your workers. Note: While it is possible to use these modules without bundling them, we highly recommend that you include modules your code is reliant upon in the code package whenever possible. Most of these modules are included in the environment because they are binary modules, making it impossible to supply them at runtime. The ones that are not binary modules are some of the more popular modules, which we include to allow users to try things out and test things with minimal setup and pain. We cannot guarantee which version of the module will be available, and we may update them without warning. Reliance on these modules may cause some unexpected conflicts in your code. "
}, {
"title": "Writing Workers in Ruby",
"url": "/worker/languages/ruby/index.html",
"section": null,
"date": null,
"categories": [],
"summary": null,
"short": "Ruby was the first language supported on IronWorker, and a lot of IronWorker's tools are written in Ruby. It is probably the easiest language to get your worker running in, as it is the most-supported language on the platform. This article will walk you through the specifics of things, but you should be familiar with the basics of IronWorker .",
"body": "Ruby was the first language supported on IronWorker, and a lot of IronWorker's tools are written in Ruby. It is probably the easiest language to get your worker running in, as it is the most-supported language on the platform. This article will walk you through the specifics of things, but you should be familiar with the basics of IronWorker . Table of Contents Quick Start Get the iron worker ng Ruby Gem Create Your Configuration File Write Your Ruby Worker Create a .worker File Upload Your Worker Queue Up Tasks for Your Worker Deep Dive A Note on Libraries Payload Example Merging Ruby on Rails Quick Start Get the iron_worker_ng Ruby Gem We recommend new users use the iron worker ng gem for Ruby workers, which makes packaging code libraries and other dependencies much easier. It also contains CLI . Older customers may be using the iron_worker gem. We recommend switching off that at your earliest convenience. If you are running Ruby 1.8, you'll need to install json gem additionally. Note that we are providing Ruby 1.9/Ruby 2.1 and you could select proper version using 'stack' keyword in your .worker file. You can install the iron_worker_ng gem from the command line: Command Line $ gem install iron_worker_ng Create Your Configuration File The CLI needs a configuration file or environment variables set that tell it what your credentials are. We have some pretty good documentation about how this works, but for simplicity's sake, just save the following as iron.json in the same folder as your .worker file: iron.json { \"project_id\" : \"INSERT YOUR PROJECT ID HERE\" , \"token\" : \"INSERT YOUR TOKEN HERE\" } You should insert your project ID and token into that iron.json file. Then, assuming you're running the commands from within the folder, the CLI will pick up your credentials and use them automatically. Write Your Ruby Worker hello_worker.rb # Worker code can be anything you want. puts \"Starting HelloWorker at #{ Time . now } \" puts \"Payload: #{ params } \" puts \"Simulating hard work for 5 seconds...\" 5 . times do | i | puts \"Sleep #{ i } ...\" sleep 1 end puts \"HelloWorker completed at #{ Time . now } \" Create a .worker File Worker files are a simple way to define your worker and its dependencies. Save the following in a file called hello.worker hello.worker # set the runtime language. Ruby workers use \"ruby\" runtime \"ruby\" # exec is the file that will be executed: exec \"hello_worker.rb\" You could include gems and other files in there too. You can read more about .worker files here . Upload Your Worker Command Line $ iron_worker upload hello That command will read your .worker file, create your worker code package and upload it to IronWorker. Head over to hud.iron.io , click the Worker link on your projects list, then click the Tasks tab. You should see your new worker listed there with zero runs. Click on it to show the task list which will be empty, but not for long. Let’s quickly test it by running: Command Line $ iron_worker queue hello Now look at the task list in HUD and you should see your task show up and go from \"queued\" to \"running\" to \"completed\". Now that we know it works, let’s queue up a bunch of tasks from code. Note: Once you upload a code package, you can queue as many tasks as you'd like against it. You only need to re-upload the code package when your code changes. Queue Up Tasks for Your Worker Now you can queue up as many tasks as you want, whenever you want, from whatever language you want. You will want to look at the docs for the client library for your language for how to queue or create a task. The following is an example in ruby, save the following into a file called enqueue.rb : enqueue.rb require 'iron_worker_ng' client = IronWorkerNG :: Client . new 100 . times do client . tasks . create ( \"hello\" , \"foo\" => \"bar\" ) end You can run that code with: Command Line $ ruby enqueue.rb Deep Dive A Note on Libraries We currently offer both the iron_worker and iron worker ng gems as officially supported client libraries. The iron_worker gem is deprecated and will no longer be under active development; the iron_worker_ng gem is actively maintained and is considered to be the gold standard gem. We suggest that new users use the iron_worker_ng gem and that users who are currently using the iron_worker gem slowly and carefully transition over when they get the opportunity. Payload Example Retrieving the payload in Ruby workers is a bit different—some of the clients take care of the dirty work for you. So while it's still the same process—get the -payload argument passed to the script at runtime, read the file it specifies, and parse the JSON contained within that file— the official client library takes care of that for you and lets you just access the payload as a variable at runtime. Here's an example: In the task queuing script: enqueue.rb require 'iron_worker_ng' client = IronWorkerNG :: Client . new task_id = client . tasks . create ( 'Worker Name Here' , { :arg1 => \"Test\" , :another_arg => [ \"apples\" , \"oranges\" ] }) In the worker: hello_worker.rb puts params [ 'arg1' ] puts params [ 'another_arg' ]. inspect Please note that for non-JSON arguments, you should use the payload variable instead of the params variable. The payload variable is simply the raw contents of the file specified by -payload , without any JSON parsing being applied. hello_worker.rb puts payload Merging Because your Ruby workers run in a Ruby environment in the cloud, you need to upload all your gems and other dependencies with your workers. Fortunately, the official client library has a built-in solution for this, called \"merging\". Gems You can find out how to merge gems and more about best practices on the Merging Gems page . Files and Directories It's often the case that a worker needs files besides the script that contains its functionality. You may need configuration files, other scripts, or other static resources. Both official client libraries have made it easy to include these auxiliary files. You can find out more about merging files and directories on the Merging Files & Directories page . Ruby on Rails It is possible to upload, queue, and manage your workers from within a Rails application, but it's important to note that IronWorker does not auto-include your models, libraries, and other Rails stack pieces. Your workers should be independent, discrete parts of an application, a mini-application in themselves, so framework usage in workers, in general, is frowned upon. Check out this blog post for step-by-step instructions on including and using the Rails stack including some models, ActionMailers, etc. "
}, {
"title": "Merging Files & Directories",
"url": "/worker/languages/ruby/merging-files-and-dirs/index.html",
"section": null,
"date": null,
"categories": [],
"summary": null,
"short": "Workers sometimes need access to resource files to be able to do their jobs. Whether these files are templates, configuration files, or data dumps, it's simple to upload them with the official client libraries. To upload a resource file through iron_worker_ng , just use the file command in your .worker file: .worker file '../config/database.yml' # will be in the same",
"body": "Workers sometimes need access to resource files to be able to do their jobs. Whether these files are templates, configuration files, or data dumps, it's simple to upload them with the official client libraries. To upload a resource file through iron_worker_ng , just use the file command in your .worker file: .worker file '../config/database.yml' # will be in the same directory as the worker file 'clients.csv' , 'information/clients' # will be in the information/clients subdirectory file takes two arguments. The first is the path to the file, the second is the optional destination. If the destination is omitted, the file will be stored in the same directory as the worker, otherwise the file will be stored as a file in the subdirectory specified by destination . If you want to merge many files, however, there's also the option to use the built-in dir command in your .worker file: .worker dir '../config' # will be in the same directory as the worker dir 'lib' , 'utils' # will be in the utils subdirectory, accessible as utils/lib Again, the two arguments are simply the path and the destination. dir treats them exactly as file does. For more information, see the iron worker ng README . "
}, {
"title": "Merging Gems",
"url": "/worker/languages/ruby/merging-gems/index.html",
"section": null,
"date": null,
"categories": [],
"summary": null,
"short": "Your workers can take advantage of the wealth of libraries that the Ruby community has produced, but it takes a little bit of setup. Merging gems is simple in iron_worker_ng ; you just use the gem command in your .worker file: .worker gem 'activerecord' gem 'paperclip' , '< 3.0.0,>=2.1.0' The first parameter is the gem name, the second is an",
"body": "Your workers can take advantage of the wealth of libraries that the Ruby community has produced, but it takes a little bit of setup. Merging gems is simple in iron_worker_ng ; you just use the gem command in your .worker file: .worker gem 'activerecord' gem 'paperclip' , '< 3.0.0,>=2.1.0' The first parameter is the gem name, the second is an optional string of version constraints. See the .worker file reference for more information. Note: Gems with binary extensions will not be merged by default. If you have such gems use remote build . You can also use the gemfile command to merge gems from a Gemfile into your worker: .worker gemfile '../Gemfile' , 'common' , 'worker' # merges gems from common and worker groups "
}, {
"title": "IronWorker Client Libraries",
"url": "/worker/libraries/index.html",
"section": null,
"date": null,
"categories": [],
"summary": null,
"short": "Official Client Libraries These are our official client libraries that use the IronWorker REST/HTTP API . Ruby PHP Python Java Node.JS Go .NET Community Supported Client Libraries These are some unofficial client libraries that use the IronWorker REST/HTTP API . Node.JS - node-ironio by Andrew Hallock .NET - IronTools by Oscar Deits .NET - IronSharp by Jeremy Bell PHP -",
"body": "Official Client Libraries These are our official client libraries that use the IronWorker REST/HTTP API . Ruby PHP Python Java Node.JS Go .NET Community Supported Client Libraries These are some unofficial client libraries that use the IronWorker REST/HTTP API . Node.JS - node-ironio by Andrew Hallock .NET - IronTools by Oscar Deits .NET - IronSharp by Jeremy Bell PHP - Codeigniter-Iron.io by jrutheiser PHP - ironio-oauth by dnovikov Perl - IO::Iron by Mikko Koivunalho We will continue to add more clients for the REST/HTTP API. If you would like to see one in particular, please let us know. We're also totally supportive if you want to build or modify client libraries yourself. Feel free to jump into our live chat support for help. We love community involvement! "
}, {
"title": "IronWorker REST HTTP API",
"url": "/worker/reference/api/index.html",
"section": null,
"date": null,
"categories": [],
"summary": null,
"short": "IronWorker provides a RESTful HTTP API to allow you to interact programmatically with our service and your workers. Endpoints Code Packages URL HTTP Verb Purpose /projects/ {Project ID} /codes GET List Code Packages /projects/ {Project ID} /codes POST Upload or Update a Code Package /projects/ {Project ID} /codes/ {Code ID} GET Get Info About A Code Package /projects/ {Project ID}",
"body": "IronWorker provides a RESTful HTTP API to allow you to interact programmatically with our service and your workers. Endpoints Code Packages URL HTTP Verb Purpose /projects/ {Project ID} /codes GET List Code Packages /projects/ {Project ID} /codes POST Upload or Update a Code Package /projects/ {Project ID} /codes/ {Code ID} GET Get Info About A Code Package /projects/ {Project ID} /codes/ {Code ID} DELETE Delete a Code Package /projects/ {Project ID} /codes/ {Code ID} /download GET Download a Code Package /projects/ {Project ID} /codes/ {Code ID} /revisions GET List Code Package Revisions Tasks URL HTTP Verb Purpose /projects/ {Project ID} /tasks GET List Tasks /projects/ {Project ID} /tasks POST Queue a Task /projects/ {Project ID} /tasks/webhook POST Queue a Task from a Webhook /projects/ {Project ID} /tasks/ {Task ID} GET Get Info About a Task /projects/ {Project ID} /tasks/ {Task ID} /log GET Get a Task's Log /projects/ {Project ID} /tasks/ {Task ID} /cancel POST Cancel a Task /projects/ {Project ID} /tasks/ {Task ID} /progress POST Set a Task's Progress /projects/ {Project ID} /tasks/ {Task ID} /retry POST Retry a Task Scheduled Tasks URL HTTP Verb Purpose /projects/ {Project ID} /schedules GET List Scheduled Tasks /projects/ {Project ID} /schedules POST Schedule a Task /projects/ {Project ID} /schedules/ {Schedule ID} GET Get Info About a Scheduled Task /projects/ {Project ID} /schedules/ {Schedule ID} /cancel POST Cancel a Scheduled Task Stacks URL HTTP Verb Purpose /stacks GET List of available stacks Authentication IronWorker uses OAuth2 tokens to authenticate API requests. You can find and create your API tokens in the HUD . To authenticate your request, you should include a token in the Authorization header for your request or in your query parameters. Tokens are universal, and can be used across services. Note that each request also requires a Project ID to specify which project the action will be performed on. You can find your Project IDs in the HUD . Project IDs are also universal, so they can be used across services as well. Example Authorization Header : Authorization: OAuth abc4c7c627376858 Note : Be sure you have the correct case: it's OAuth , not Oauth. Example Query with Parameters : GET https:// worker-aws-us-east-1 .iron.io/2/projects/ {Project ID} /tasks?oauth=abc4c7c627376858 Requests Requests to the API are simple HTTP requests against the API endpoints. All request bodies should be in JSON format. Unless otherwise noted, all requests should use the following headers (in addition to their authentication): - Accept : application/json - Accept-Encoding : gzip/deflate - Content-Type : application/json Base URL All endpoints should be prefixed with the following: https:// {Host} .iron.io/ {API Version} / API Version Support : IronWorker API supports version 2 The domains for the clouds Iron Worker supports are as follows: Cloud {Host} AWS worker-aws-us-east-1 Pagination For endpoints that return lists/arrays of values: page - The page of results to return. Default is 0. Maximum is 100. per_page - The number of results to return. It may be less if there aren't enough results. Default is 30. Maximum is 100. Responses All responses are in JSON with a Content-Type of application/json . Your requests should all contain an Accept: application/json header to accommodate the responses. Status Codes The success failure for request is indicated by an HTTP status code. A 2xx status code indicates success, whereas a 4xx status code indicates an error. Code Status 200 Success 401 Invalid authentication: The OAuth token is either not provided or invalid. 403 Project suspected, resource limits. 404 Invalid endpoint: The resource, project, or endpoint being requested doesn’t exist. 405 Invalid HTTP method: A GET, POST, DELETE, or PUT was sent to an endpoint that doesn’t support that particular verb. 406 Invalid request: Required fields are missing. Errors In the event of an error, the appropriate status code will be returned with a body containing more information. An error response is structured as follows: { \"msg\" : \"reason for error\" } Exponential Backoff When a 503 error code is returned, it signifies that the server is currently unavailable. This means there was a problem processing the request on the server-side; it makes no comment on the validity of the request. Libraries and clients should use exponential backoff when confronted with a 503 error, retrying their request with increasing delays until it succeeds or a maximum number of retries (configured by the client) has been reached. Dates and Times All dates, times, and timestamps will use the ISO 8601 / RFC 3339 format. Code Packages Your workers are run against code packages that can be updated and deleted over time. The code packages define the functionality a worker has through the code they contain. Put simply, code packages are simply the code that will run when your worker runs. List Code Packages Endpoint GET /projects/ {Project ID} /codes URL Parameters Project ID : The ID of the project whose code packages you want to get a list of. Optional Query Parameters page : The page of code packages you want to retrieve, starting from 0. Default is 0, maximum is 100. per_page : The number of code packages to return per page. Note this is a maximum value, so there may be fewer packages returned if there aren’t enough results. Default is 30, maximum is 100. Response The response will be a JSON object. The \"codes\" property will contain a JSON array of objects, each representing a code package. Sample: { \"codes\" : [ { \"id\" : \"4ea9c05dcddb131f1a000002\" , \"project_id\" : \"4ea9c05dcddb131f1a000001\" , \"name\" : \"MyWorker\" , \"runtime\" : \"ruby\" , \"latest_checksum\" : \"b4781a30fc3bd54e16b55d283588055a\" , \"rev\" : 1 , \"latest_history_id\" : \"4f32ecb4f840063758022153\" , \"latest_change\" : 1328737460598000000 } ] } Upload or Update a Code Package You will almost always want to use our Command Line Interface to make uploading easier. Building a Code Package If your client doesn't support uploading code packages and you don't want to use the CLI , you're going to need to build the code package yourself before uploading it. Code should be submitted as a zip file containing all of the files your project needs. That includes dependencies, libraries, data files... everything. Endpoint POST /projects/ {Project ID} /codes URL Parameters Project ID : The ID of the project that you are uploading the code to. Request The request should be JSON-encoded and contain the following information: name : A unique name for your worker. This will be used to assign tasks to the worker as well as to update the code. If a worker with this name already exists, the code you are uploading will be added as a new revision. When uploading code, the following are required (not required if just updating code options below): file : A multipart-encoded string containing the zip file you are uploading. file_name : The name of the file within the zip that will be executed when a task is run. runtime : The language to execute your worker with. The following values are valid: sh ruby python php The request also accepts the following optional parameters: config : An arbitrary string (usually YAML or JSON) that, if provided, will be available in a file that your worker can access. The config file location will be passed in via the -config argument to your worker. The config cannot be larger than 64KB in size. stack : A string that, if provided, will set the specific language environment. If blank the language version will be set to default language version defined in runtime. See More Information on Stack settings . max_concurrency : The maximum number of workers that should be run in parallel. This is useful for keeping your workers from hitting API quotas or overloading databases that are not prepared to handle the highly-concurrent worker environment. If omitted, there will be no limit on the number of concurrent workers. retries : The maximum number of times failed tasks should be retried, in the event that there's an error while running them. If omitted, tasks will not be retried. Tasks cannot be retried more than ten times. retries_delay : The number of seconds to wait before retries. If omitted, tasks will be immediately retried. default_priority : The default priority of the tasks running this code. Valid values are 0, 1, and 2. The priority of the task can be set when queueing the task. If it's not set when queueing the task, the default priority is used. Your request also needs the following headers, in addition to the headers required by all API calls: Content-Length : The number of bytes in your JSON-encoded request body Content-Type : Should be set to \"multipart/form-data ; boundary={Insert Value Here}\" with boundary set to an appropriate value . Note : This request is not limited to 64 KB, unlike other requests. Sample Headers : Content-Length: 3119 Content-Type: multipart/form-data; boundary=39f5903459794ad483153244cc6486ec Sample Body : --39f5903459794ad483153244cc6486ec Content-Disposition: form-data; name=\"data\" Content-Type: text/plain; charset=utf-8 { \"file_name\" : \"MyWorker.rb\" , \"name\" : \"MyWorker\" , \"runtime\" : \"ruby\" , \"max_concurrency\" : 12 } --39f5903459794ad483153244cc6486ec Content-Disposition: form-data; name=\"file\"; filename=\"MyWorker.zip\" Content-Type: application/zip { Form-encoded zip data goes here } --39f5903459794ad483153244cc6486ec-- Response The response will be a JSON object containing a \"msg\" property that contains a description of the response. Sample: { \"msg\" : \"Upload successful.\" } Get Info About a Code Package Endpoint GET /projects/ {Project ID} /codes/ {Code ID} URL Parameters Project ID : The ID of the project that the code package belongs to. Code ID : The ID of the code package you want details on. Response The response will be a JSON object containing the details of the code package. Sample: { \"id\" : \"4eb1b241cddb13606500000b\" , \"project_id\" : \"4eb1b240cddb13606500000a\" , \"name\" : \"MyWorker\" , \"runtime\" : \"ruby\" , \"latest_checksum\" : \"a0702e9e9a84b758850d19ddd997cf4a\" , \"rev\" : 1 , \"latest_history_id\" : \"4eb1b241cddb13606500000c\" , \"latest_change\" : 1328737460598000000 } Delete a Code Package Endpoint DELETE /projects/ {Project ID} /codes/ {Code ID} URL Parameters Project ID : The ID of the project that the code package belongs to. Code ID : The ID of the code package you want to delete. Response The response will be a JSON object containing a message property explaining whether the request was successful or not. Sample: { \"msg\" : \"Deleted\" } Download a Code Package Endpoint GET /projects/ {Project ID} /codes/ {Code ID} /download URL Parameters Project ID : The ID of the project that the task belongs to. Code Package ID : The ID of the task you want details on. Optional Query Parameters revision : The revision of the code package you want to download. If not specified, the latest revision will be downloaded. Response The response will be a zip file containing your code package. The response header will include a Content-Disposition header containing \"filename=yourworker_rev.zip\", where yourworker is the code package’s name and rev is the numeric revision. The response’s Content-Type will be \"application/zip\". List Code Package Revisions Endpoint GET /projects/ {Project ID} /codes/ {Code ID} /revisions URL Parameters Project ID : The ID of the project that the task belongs to. Code Package ID : The ID of the code package whose revisions you’re retrieving. Optional Query Parameters page : The page of revisions you want to retrieve, starting from 0. Default is 0, maximum is 100. per_page : The number of revisions to return per page. Note this is a maximum value, so there may be less revisions returned if there aren’t enough results. Default is 30, maximum is 100. Response The response will be a JSON object with a revisions property, containing a list of JSON objects, each representing a revision to the code package. Sample: { \"revisions\" : [ { \"id\" : \"4f32d9c81cf75447be020ea6\" , \"code_id\" : \"4f32d9c81cf75447be020ea5\" , \"project_id\" : \"4f32d521519cb67829000390\" , \"rev\" : 1 , \"runtime\" : \"ruby\" , \"name\" : \"MyWorker\" , \"file_name\" : \"worker.rb\" , }, { \"id\" : \"4f32da021cf75447be020ea8\" , \"code_id\" : \"4f32d9c81cf75447be020ea5\" , \"project_id\" : \"4f32d521519cb67829000390\" , \"rev\" : 2 , \"runtime\" : \"ruby\" , \"name\" : \"MyWorker\" , \"file_name\" : \"worker.rb\" , } ] } Tasks Tasks are specific instance of your workers being run. They encompass a single execution of a code package. Tasks consist of the code package to be run and the data to pass to the code package. Task Properties Task State Tasks will be in different states during the course of operation. Here are the states that tasks can be in in the system: Task State Status queued in the queue, waiting to run running running complete finished running error error during processing cancelled cancelled by user killed killed by system timeout exceeded processing time threshold Priority Task priority determines how much time a task may sit in queue. Higher values means tasks spend less time in the queue once they come off the schedule. Access to priorities depends on your selected IronWorker plan see plans . You must have access to higher priority levels in your chosen plan or your priority will automatically default back to 0. The standard/default priority is 0. Priority 0 Default 1 Medium 2 High (less time in queue) Timeout Tasks have timeouts associated with them that specify the amount of time (in seconds) the process may run. The maximum timeout is 3600 seconds (60 minutes). It’s also the default timeout but it can be set on a task-by-task basis to be anytime less than 3600 seconds. Timeout (in seconds) 3600 Maximum time a task can run (also default) Runtime Languages ruby python php List Tasks Endpoint GET /projects/ {Project ID} /tasks?code_name={CODE NAME} URL Parameters Project ID : The ID of the project whose tasks you want to get a list of. Required Query Parameters code_name : The name of your worker (code package). Optional Query Parameters page : The page of tasks you want to retrieve, starting from 0. Default is 0, maximum is 100. per_page : The number of tasks to return per page. Note this is a maximum value, so there may be less tasks returned if there aren’t enough results. Default is 30, maximum is 100. Filter by Status: the parameters queued , running , complete , error , cancelled , killed , and timeout will all filter by their respective status when given a value of 1 . These parameters can be mixed and matched to return tasks that fall into any of the status filters. If no filters are provided, tasks will be displayed across all statuses. from_time : Limit the retrieved tasks to only those that were created after the time specified in the value. Time should be formatted as the number of seconds since the Unix epoch. to_time : Limit the retrieved tasks to only those that were created before the time specified in the value. Time should be formatted as the number of seconds since the Unix epoch. Response The response will be a JSON object. The \"tasks\" property will contain a JSON array of objects, each representing a task. Sample: { \"tasks\" : [ { \"id\" : \"4f3595381cf75447be029da5\" , \"created_at\" : \"2012-02-10T22:07:52.712Z\" , \"updated_at\" : \"2012-02-10T22:11:55Z\" , \"project_id\" : \"4f32d521519cb67829000390\" , \"code_id\" : \"4f32d9c81cf75447be020ea5\" , \"status\" : \"complete\" , \"msg\" : \"SetProgress output\" , \"code_name\" : \"MyWorker\" , \"start_time\" : \"2012-02-10T22:07:54Z\" , \"end_time\" : \"2012-02-10T22:11:55Z\" , \"duration\" : 241441 , \"run_times\" : 1 , \"timeout\" : 3600 , \"percent\" : 100 } ] } Queue a Task Endpoint POST /projects/ {Project ID} /tasks URL Parameters Project ID : The ID of the project that you are creating the task in. Request The request should be JSON-encoded and consist of an object with a single property, \"tasks\", which contains an array of objects. Each object in the array should consist of: code_name : The name of the code package to execute for this task. payload : A string of data to be passed to the worker (usually JSON) so the worker knows exactly what worker it should perform. This is the equivalent to a message in a typical message queue. The payload will be available in a file that your worker can access. File location will be passed in via the -payload argument. The payload cannot be larger than 64KB in size. Optionally, each object in the array can also contain the following: priority : The priority queue to run the task in. Valid values are 0, 1, and 2. Task priority determines how much time a task may sit in queue. Higher values means tasks spend less time in the queue once they come off the schedule. Access to priorities depends on your selected IronWorker plan see plans . You must have access to higher priority levels in your chosen plan or your priority will automatically default back to 0. The standard/default priority is 0. cluster : cluster name ex: \"high-mem\" or \"dedicated\". This is a premium feature for customers to have access to more powerful or custom built worker solutions. Dedicated worker clusters exist for users who want to reserve a set number of workers just for their queued tasks. If not set default is set to \"default\" which is the public IronWorker cluster. timeout : The maximum runtime of your task in seconds. No task can exceed 3600 seconds (60 minutes). The default is 3600 but can be set to a shorter duration. delay : The number of seconds to delay before actually queuing the task. Default is 0. The request also needs to be sent with a \"Content-Type: application/json\" header, or it will respond with a 406 status code and a \"msg\" property explaining the missing header. Sample: { \"tasks\" : [ { \"code_name\" : \"MyWorker\" , \"payload\" : \"{\\\"x\\\": \\\"abc\\\", \\\"y\\\": \\\"def\\\"}\" } ] } Response The response will be a JSON object containing a \"msg\" property that contains a description of the response and a \"tasks\" property that contains an array of objects, each with an \"id\" property that contains the created task’s ID. Sample: { \"msg\" : \"Queued up\" , \"tasks\" : [ { \"id\" : \"4eb1b471cddb136065000010\" } ] } Queue a Task From a Webhook Endpoint POST /projects/ {Project ID} /tasks/webhook?code_name= {Code Name} URL Parameters Project ID : The ID of the project that you are uploading the code to. Code Name : The name of the code package you want to execute the task. Optionally, following URL parameters can be sent: priority : The priority queue to run the task in. Valid values are 0, 1, and 2. Task priority determines how much time a task may sit in queue. Higher values means tasks spend less time in the queue once they come off the schedule. Access to priorities depends on your selected IronWorker plan see plans . You must have access to higher priority levels in your chosen plan or your priority will automatically default back to 0. The standard/default priority is 0. cluster : cluster name ex: \"high-mem\" or \"dedicated\". This is a premium feature for customers to have access to more powerful or custom built worker solutions. Dedicated worker clusters exist for users who want to reserve a set number of workers just for their queued tasks. If not set default is set to \"default\" which is the public IronWorker cluster. timeout : The maximum runtime of your task in seconds. No task can exceed 3600 seconds (60 minutes). The default is 3600 but can be set to a shorter duration. delay : The number of seconds to delay before actually queuing the task. Default is 0. Sample endpoint with all optional parameters set: POST /projects/ {Project ID} /tasks/webhook?code_name= {Code Name} &priority= {priority} &delay= {delay} &cluster= {cluster} &timeout= {timeout} Request The request body is free-form: anything at all can be sent. Whatever the request body is will be passed along as the payload for the task, and therefore needs to be under 64KB in size. Response The response will be a JSON object containing a \"msg\" property that contains a description of the response. Sample: { \"id\" : \"4f3595381cf75447be029da5\" , \"msg\" : \"Queued up.\" } Get Info About a Task Endpoint GET /projects/ {Project ID} /tasks/ {Task ID} URL Parameters Project ID : The ID of the project that the task belongs to. Task ID : The ID of the task you want details on. Response The response will be a JSON object containing the details of the task. Sample: { \"id\" : \"4eb1b471cddb136065000010\" , \"project_id\" : \"4eb1b46fcddb13606500000d\" , \"code_id\" : \"4eb1b46fcddb13606500000e\" , \"code_history_id\" : \"4eb1b46fcddb13606500000f\" , \"status\" : \"complete\" , \"code_name\" : \"MyWorker\" , \"code_rev\" : \"1\" , \"start_time\" : 1320268924000000000 , \"end_time\" : 1320268924000000000 , \"duration\" : 43 , \"timeout\" : 3600 , \"payload\" : \"{\\\"foo\\\":\\\"bar\\\"}\" , \"updated_at\" : \"2012-11-10T18:31:08.064Z\" , \"created_at\" : \"2012-11-10T18:30:43.089Z\" } Get a Task’s Log Endpoint GET /projects/ {Project ID} /tasks/ {Task ID} /log URL Parameters Project ID : The ID of the project that the task belongs to. Task ID : The ID of the task whose log you are retrieving. Response Unlike the other API methods, this method will return a Content-Type of \"text/plain\". The response will only include the task’s log. Sample: Hello World! Cancel a Task Endpoint POST /projects/ {Project ID} /tasks/ {Task ID} /cancel URL Parameters Project ID : The ID of the project that the task belongs to. Task ID : The ID of the task you want to cancel. Response The response will be a JSON object containing a message explaining whether the request was successful or not. Sample: { \"msg\" : \"Cancelled\" } Set a Task’s Progress Endpoint POST /projects/ {Project ID} /tasks/ {Task ID} /progress URL Parameters Project ID : The ID of the project that contains the task. Task ID : The ID of the task whose progress you are updating. Request The request should be JSON-encoded and can contain the following information: percent : An integer, between 0 and 100 inclusive, that describes the completion of the task. msg : Any message or data describing the completion of the task. Must be a string value, and the 64KB request limit applies. The request also needs to be sent with a \"Content-Type: application/json\" header, or it will respond with a 406 status code and a \"msg\" property explaining the missing header. Sample: { \"percent\" : 25 , \"msg\" : \"Any message goes here.\" } Response The response will be a JSON object containing a message explaining whether the request was successful or not. Sample: { \"msg\" : \"Progress set\" } Retry a Task Endpoint POST /projects/ {Project ID} /tasks/ {Task ID} /retry URL Parameters Project ID : The ID of the project that the task belongs to. Task ID : The ID of the task you want to cancel. Request The request must be JSON-encoded and can contain the following information: delay : The number of seconds the task should be delayed before it runs again. The request also needs to be sent with a \"Content-Type: application/json\" header, or it will respond with a 406 status code and a \"msg\" property explaining the missing header. Response The response will be a JSON object containing a message explaining whether the request was successful or not. Sample: { \"msg\" : \"Queued up\" , \"tasks\" : [ { \"id\" : \"4eb1b471cddb136065000010\" } ] } Scheduled Tasks Scheduled tasks are just tasks that run on a schedule. While the concept is simple, it enables a powerful class of functionality: tasks can be used as cron workers, running at specific intervals a set (or unlimited) number of times. List Scheduled Tasks Endpoint GET /projects/ {Project ID} /schedules URL Parameters Project ID : The ID of the project whose scheduled tasks you want to get a list of. Optional Query Parameters page : The page of scheduled tasks you want to retrieve, starting from 0. Default is 0, maximum is 100. per_page : The number of scheduled tasks to return per page. Note this is a maximum value, so there may be less tasks returned if there aren’t enough results. Default is 30, maximum is 100. Response The response will be a JSON object. The \"schedules\" property will contain a JSON array of objects, each representing a schedule. Sample: { \"schedules\" : [ { \"id\" : \"4eb1b490cddb136065000011\" , \"created_at\" : \"2012-02-14T03:06:41Z\" , \"updated_at\" : \"2012-02-14T03:06:41Z\" , \"project_id\" : \"4eb1b46fcddb13606500000d\" , \"msg\" : \"Ran max times.\" , \"status\" : \"complete\" , \"code_name\" : \"MyWorker\" , \"start_at\" : \"2011-11-02T21:22:34Z\" , \"end_at\" : \"2262-04-11T23:47:16Z\" , \"next_start\" : \"2011-11-02T21:22:34Z\" , \"last_run_time\" : \"2011-11-02T21:22:51Z\" , \"run_times\" : 1 , \"run_count\" : 1 , \"cluster\" : \"high-memory\" } ] } Schedule a Task Endpoint POST /projects/ {Project ID} /schedules URL Parameters Project ID : The ID of the project that you want to schedule the task in. Request The request should be a JSON object with a \"schedules\" property containing an array of objects with the following properties: code_name : The name of the code package to execute. payload : A string of data to pass to the code package on execution. Optionally, each object in the array can specify the following properties: start_at : The time the scheduled task should first be run. run_every : The amount of time, in seconds, between runs. By default, the task will only run once. run_every will return a 400 error if it is set to less than 60 . end_at : The time tasks will stop being queued. Should be a time or datetime. run_times : The number of times a task will run. priority : The priority queue to run the task in. Valid values are 0, 1, and 2. Task priority determines how much time a task may sit in queue. Higher values means tasks spend less time in the queue once they come off the schedule. Access to priorities depends on your selected IronWorker plan see plans . You must have access to higher priority levels in your chosen plan or your priority will automatically default back to 0. The standard/default priority is 0. cluster : cluster name ex: \"high-mem\" or \"dedicated\". This is a premium feature for customers for customers to have access to more powerful or custom built worker solutions. Dedicated worker clusters exist for users who want to reserve a set number of workers just for their queued tasks. If not set default is set to \"default\" which is the public IronWorker cluster. The request also needs to be sent with a \"Content-Type: application/json\" header, or it will respond with a 406 status code and a \"msg\" property explaining the missing header. Sample: { schedules : [ { payload : \"{\\\"x\\\": \\\"abc\\\", \\\"y\\\": \\\"def\\\"}\" , name : \"MyScheduledTask\" , code_name : \"MyWorker\" run_every : 3600 } ] } Response The response will be a JSON object containing a \"msg\" property that contains a description of the response and a \"schedules\" property that contains an array of objects, each with an \"id\" property that contains the scheduled task’s ID. Sample: { \"msg\" : \"Scheduled\" , \"schedules\" : [ { \"id\" : \"4eb1b490cddb136065000011\" } ] } Get Info About a Scheduled Task Endpoint GET /projects/ {Project ID} /schedules/ {Schedule ID} URL Parameters Project ID : The ID of the project that the scheduled task belongs to. Schedule ID : The ID of the scheduled task you want details on. Response The response will be a JSON object containing the details of the scheduled task. Sample: { \"id\" : \"4eb1b490cddb136065000011\" , \"created_at\" : \"2011-11-02T21:22:51Z\" , \"updated_at\" : \"2011-11-02T21:22:51Z\" , \"project_id\" : \"4eb1b46fcddb13606500000d\" , \"msg\" : \"Ran max times.\" , \"status\" : \"complete\" , \"code_name\" : \"MyWorker\" , \"delay\" : 10 , \"start_at\" : \"2011-11-02T21:22:34Z\" , \"end_at\" : \"2262-04-11T23:47:16Z\" , \"next_start\" : \"2011-11-02T21:22:34Z\" , \"last_run_time\" : \"2011-11-02T21:22:51Z\" , \"run_times\" : 1 , \"run_count\" : 1 } Cancel a Scheduled Task Endpoint POST /projects/ {Project ID} /schedules/ {Schedule ID} /cancel URL Parameters Project ID : The ID of the project that the scheduled task belongs to. Schedule ID : The ID of the scheduled task you want to cancel. Response The response will be a JSON object containing a message explaining whether the request was successful or not. Sample: { \"msg\" : \"Cancelled\" } Stacks List of available stacks List of stacks Endpoint GET /stacks Response The response will be a JSON object. Sample: { [ \"scala-2.9\" , \"ruby-2.1\" , \"ruby-1.9\" , \"python-3.2\" , \"python-2.7\" , \"php-5.4\" , \"node-0.10\" , \"java-1.7\" , \"mono-3.0\" , \"mono-2.10\" ] } "
}, {
"title": "IronWorker Local and Remote Builds",
"url": "/worker/reference/builds/index.html",
"section": null,
"date": null,
"categories": [],
"summary": null,
"short": "A lot of users want to use packages, gems, and libraries that depend on binary extensions when working with IronWorker. These binaries need to be built to target the IronWorker environment, which can make the process of deploying a worker more complex. To make working with binary extensions easier, our CLI provides two different ways to build your workers: locally",
"body": "A lot of users want to use packages, gems, and libraries that depend on binary extensions when working with IronWorker. These binaries need to be built to target the IronWorker environment, which can make the process of deploying a worker more complex. To make working with binary extensions easier, our CLI provides two different ways to build your workers: locally (on your machine) and remotely (in a build worker on Iron.io's servers). Table of Contents Local Build Remote Build Resolve the Issue with Native Extensions Remote .worker File Local Build By default, workers are built locally. If your worker does not need any binary extensions or compiled components, building locally is the best choice. Just type Command Line $ iron_worker upload cool_feature and relax. The CLI will merge the directories, files, libraries, and modules you listed in your .worker file into a zip archive that is then uploaded to IronWorker using the API . Now you are able to queue or schedule tasks against the worker. Remote Build Resolve the Issue with Native Extensions When you worker requires a native extension or is written in a compiled language that produces a binary, it needs to be compiled against the IronWorker architecture. While you can compile everything manually against 64-bit (x86-64) Linux and write scripts to set up your worker environment, it's a lot easier to just let the worker build everything for you. This is what the remote build is for. It automatically creates a worker that will build the worker specified by your .worker file, builds your worker, and uploads it using the API. This allows you to run your build process entirely on IronWorker's infrastructure, so everything is automatically targeting the right environment. The only downside is that this type of build can take a couple of minutes to run, making it slower than a local build. To enable remote build, add the following line to your .worker file: .worker full_remote_build true or just .worker remote This forces to install all your dependencies in IronWorker environment . Remote .worker File Using HTTP link as your .worker file enables full remote build automatically. Command Line $ iron_worker upload http://my.site/my.worker This could be helpful when you want to load the worker from HTTP endpoint. In this case exec , file , gemfile , and deb directives are all prepended with the base URL of the .worker file. If the http://my.site/my.worker file looks like this: .worker exec \"my_exec\" file \"my_file\" deb \"my.deb\" gemfile \"Gemfile\" It will be read by the remote build worker as this: .worker exec \"http://my.site/my_exec\" file \"http://my.site/my_file\" deb \"http://my.site/my.deb\" gemfile \"http://my.site/Gemfile\" "
}, {
"title": "The IronWorker Command Line Interface",
"url": "/worker/reference/cli/index.html",
"section": null,
"date": null,
"categories": [],
"summary": null,
"short": "PaaS providers like Heroku , AppFog , and App Engine have all standardised around the convention of using a command line interface to interact with your apps. In our effort to provide tools that work with developers' current work flow, IronWorker has created a command line tool to interact with the service. Table of Contents Installing Configuration Testing Your Workers",
"body": "PaaS providers like Heroku , AppFog , and App Engine have all standardised around the convention of using a command line interface to interact with your apps. In our effort to provide tools that work with developers' current work flow, IronWorker has created a command line tool to interact with the service. Table of Contents Installing Configuration Testing Your Workers Locally Creating and Uploading Code Packages Upload with Multiple Environments Queuing Tasks Scheduling Tasks Retrieving a Task's Log Installing The command line interface for IronWorker uses the IronWorkerNG gem , so you'll need to install both the gem and Ruby. To check if you have Ruby installed, run Command Line $ ruby -v If you don't have Ruby installed, you can get instructions on installing it from the Ruby website . Once Ruby is installed, you'll need the IronWorkerNG gem: Command Line $ gem install iron_worker_ng You should be all set up now. To check your installation, run the following: Command Line $ iron_worker -v Configuration The command line tool is really just the Ruby gem, so it follows the global configuration scheme that all official libraries use. You can configure the tool by creating an iron.json file in the directory of the .worker file, an .iron.json file in your home directory, or the environment variables. For example, to override the project ID for a single command, you could run the following: Command Line $ IRON_PROJECT_ID = new_project_id_here iron_worker upload myworker The same applies to the IRON_TOKEN environment variable. You can use .worker files to define workers that can then be uploaded or run using the command line tools. Testing Your Workers Locally It's a pain to upload every change in code without knowing if it works. To help ease that pain, we've created a command to run workers locally, on your machine. You can use the following command to run a worker: Command Line $ iron_worker run $WORKER Where $WORKER is replaced with the name your .worker file. For example, if your file is named my_worker.worker , you would use iron_worker run my_worker . If you need to test code that uses a payload, just include the payload or the path to a file containing the payload: Command Line $ # specify the payload inline $ iron_worker run $WORKER --payload '{\"this\": \"is a test\", \"that\": {\"test\": \"object test\"}}' $ # specify a file containing the payload $ iron_worker run $WORKER --payload-file /path/to/payload.json Important notes: The CLI offers the run command to try and help test and debug workers locally. Because it is so complicated to manage an environment, this may not function in every environment. Here are some scenarios in which you may not be able to use the run command: When running under Windows. When running compiled binaries or packages on OS X or 32-bit Linux. When using the deb feature in your .worker file under non-Debian systems: Ruby Code deb \"feature-package.deb\" Possible solution: install dpkg . Trying to use a dependency (like \"mono\") that is present in IronWorker's environment but not your local environment. For best results, we recommend using the run command in an environment that matches IronWorker's as closely as possible: 64-bit (x86-64) Ubuntu Linux, with the same pre-installed packages installed. Creating and Uploading Code Packages The command to upload a worker is: Command Line $ iron_worker upload $WORKER Where $WORKER is replaced by the name of your worker file, minus the .worker. Sometimes, you want to limit the number of parallel workers for any given task, to prevent external resources like databases or APIs from crashing under the weight of your workers' requests. We have a max_concurrency feature that lets you do just this. To use it, simply use the --max-concurrency option when uploading a worker, with the maximum number of workers that can be run in parallel: Command Line $ iron_worker upload $WORKER --max-concurrency 10 If you're worried about errors, your worker is idempotent (meaning that it can be run multiple times without affecting the result), and you'd like to automatically retry your worker if it errors out, you can use the retries and retries-delay options. retries allows you to specify the maximum number of times failed tasks will be re-run: Command Line $ iron_worker upload $WORKER --retries 5 You can also optionally specify the delay between retries by using retries-delay : Command Line $ iron_worker upload $WORKER --retries 5 --retries-delay 10 There are additional options available to the upload command; you can find a list of them by running iron_worker upload --help . All of these options can be mixed and matched at will to easily create very complex, specific behaviors. Upload with Multiple Environments It is common to want to use IronWorker across many different development environments. When uploading your worker you can specify an environment via the ** --env ** (or ** -e ** option). Command Line $ iron_worker upload helloworker --env development $ iron_worker upload helloworker --env staging $ iron_worker upload helloworker -e test $ iron_worker upload helloworker -e production We reccomend you create seperate projects for each development environment. Below is an example of a typical iron.json with multiple environments iron.json into multiple development environments via different project id's and tokens. { \"production\" : { \"token\" : \"AAAAAAAAAAAAAAAAAAAAAAAAAAA\" , \"project_id\" : \"000000000000000000000001\" }, \"staging\" : { \"token\" : \"BBBBBBBBBBBBBBBBBBBBBBBBBB\" , \"project_id\" : \"000000000000000000000002\" }, \"development\" : { \"token\" : \"CCCCCCCCCCCCCCCCCCCCCCCCCC\" , \"project_id\" : \"000000000000000000000003\" }, \"test\" : { \"token\" : \"DDDDDDDDDDDDDDDDDDDDDDDDDD\" , \"project_id\" : \"000000000000000000000004\" } } Queuing Tasks Testing workers no longer takes a script that creates a task to test with. Instead, you can queue tasks directly from the command line: Command Line $ iron_worker queue $WORKER [ --priority 0 | 1 | 2 ] [ --payload '{\"somekey\": \"some_value\", \"array\": [\"item1\", \"item2\"]}' ] Alternatively, you can specifiy a payload file, instead of providing the payload inline: Command Line $ iron_worker queue $WORKER --payload-file /path/to/payload/file.json Sometimes, you want a task to be queued after a delay. You can easily do this with the --delay option: Command Line $ iron_worker queue $WORKER --delay 60 The task will then be queued after the number of seconds passed to delay (one minute in the above example). If you want to limit a task to a certain run time below our one hour max, you can do that with the --timeout option: Command Line $ iron_worker queue $WORKER --timeout 1800 The task will automatically be killed after the number of seconds passed to timeout (half an hour in the above example). There are a lot of options when you queuing tasks that can be combined to get exactly the execution you need. You can find a list of these options by running iron_worker queue --help . Scheduling Tasks The command line tool also allows you to schedule tasks to be run repeatedly or at a later time, just as the gem would allow you to in a script. You can schedule a task using the following command: Command Line $ iron_worker schedule [ --start-at \"2013-01-01T00:00:00-04:00\" ] [ --run-times 4 ] [ --priority 0 | 1 | 2 ] [ --payload '{\"somekey\": \"some_value\"}' ] $WORKER You can find a list of options for the command by running iron_worker schedule --help . Retrieving a Task's Log You no longer have to write a script to check the log of your tasks. You can install call the following command: Command Line $ iron_worker log [ OPTIONS ] You can find a list of options for the command by running iron_worker log --help . "
}, {
"title": "Configuring the Official Client Libraries",
"url": "/worker/reference/configuration/index.html",
"section": null,
"date": null,
"categories": [],
"summary": null,
"short": "Many of the client libraries make use of a global configuration scheme for all of Iron.io services. This approach lets you set and manage your tokens and project IDs in a centralized manner and make them available across all of Iron.io's services, even across workspaces. This scheme allows you to spend less time on configuration issues and more on writing",
"body": "Many of the client libraries make use of a global configuration scheme for all of Iron.io services. This approach lets you set and manage your tokens and project IDs in a centralized manner and make them available across all of Iron.io's services, even across workspaces. This scheme allows you to spend less time on configuration issues and more on writing code. It also supports the design pattern that calls for strict separation of configuration information from application code. The two most common variables used in configuration are the project ID and the token . The project ID is a unique identifier for your project and can be found in the HUD . The token is one of your OAuth2 tokens, which can be found on their own page in the HUD. Table of Contents Quick Start About the Scheme The Overall Hierarchy The Environment Variables The File Hierarchy The JSON Hierarchy Example Setting Host Example Accepted Values Quick Start Create a file called .iron.json in your home directory (i.e., ~/.iron.json ) and enter your Iron.io credentials: .iron.json { \"token\" : \"MY_TOKEN\" , \"project_id\" : \"MY_PROJECT_ID\" } The project_id you use will be the default project to use. You can always override this in your code. Alternatively, you can set the following environment variables: IRON_TOKEN = MY_TOKEN IRON_PROJECT_ID = MY_PROJECT_ID That's it, now you can get started. About the Scheme The configuration scheme consists of three hierarchies: the file hierarchy, the JSON hierarchy, and the overall hierarchy. By understanding these three hierarchies and how clients determine the final configuration values, you can build a powerful system that saves you redundant configuration while allowing edge cases. The Overall Hierarchy The overall hierarchy is simple to understand: local takes precedence over global. The configuration is constructed as follows: The global configuration file sets the defaults according to the file hierarchy. The global environment variables overwrite the global configuration file's values. The product-specific environment variables overwrite everything before them. The local configuration file overwrites everything before it according to the file hierarchy. The configuration file specified when instantiating the client library overwrites everything before it according to the file hierarchy. The arguments passed when instantiating the client library overwrite everything before them. The Client's Environment Variables set in iron.json The environment variables the scheme looks for are all of the same formula: the camel-cased product name is switched to an underscore (\"IronWorker\" becomes \"iron_worker\") and converted to be all capital letters. For the global environment variables, \"IRON\" is used by itself. The value being loaded is then joined by an underscore to the name, and again capitalised. For example, to retrieve the OAuth token, the client looks for \"IRON_TOKEN\". In the case of product-specific variables (which override global variables), it would be \"IRON_WORKER_TOKEN\" (for IronWorker). Accepted Values The configuration scheme looks for the following values: project_id : The ID of the project to use for requests. token : The OAuth token that should be used to authenticate requests. Can be found in the HUD . host : The domain name the API can be located at. Defaults to a product-specific value, but always using Amazon's cloud. protocol : The protocol that will be used to communicate with the API. Defaults to \"https\", which should be sufficient for 99% of users. port : The port to connect to the API through. Defaults to 443, which should be sufficient for 99% of users. api_version : The version of the API to connect through. Defaults to the version supported by the client. End-users should probably never change this. Note that only the project_id and token values need to be set. They do not need to be set at every level of the configuration, but they must be set at least once by the levels that are used in any given configuration. It is recommended that you specify a default project_id and token in your iron.json file. The File Hierarchy The hierarchy of files is simple enough: if a file named .iron.json exists in your home folder, that will provide the defaults. if a file named iron.json exists in the same directory as the script being run, that will be used to overwrite the values from the .iron.json file in your home folder. Any values in iron.json that are not found in .iron.json will be added; any values in .iron.json that are not found in iron.json will be left alone; any values in .iron.json that are found in iron.json will be replaced with the values in iron.json . This allows a lot of flexibility: you can specify a token that will be used globally (in .iron.json ), then specify the project ID for each project in its own iron.json file. You can set a default project ID, but overwrite it for that one project that uses a different project ID. The JSON Hierarchy Each file consists of a single JSON object, potentially with many sub-objects. The JSON hierarchy works in a similar manner to the file hierarchy: the top level provides the defaults. If the top level contains a JSON object whose key is an Iron.io service ( iron_worker , iron_mq , or iron_cache ), that will be used to overwrite those defaults when one of their clients loads the config file. This allows you to define a project ID once and have two of the services use it, but have the third use a different project ID. Example In the event that you wanted to set a token that would be used globally, you would set ~/.iron.json to look like this: .iron.json { \"token\" : \"YOUR TOKEN HERE\" } To follow this up by setting your project ID for each project, you would create an iron.json file in each project's directory: iron.json { \"project_id\" : \"PROJECT ID HERE\" } If, for one project, you want to use a different token, simply include it in that project's iron.json file: iron.json { \"project_id\" : \"PROJECT ID HERE\" , \"token\" : \"YOUR TOKEN HERE\" } Now for that project and that project only , the new token will be used. If you want all your IronCache projects to use a different project ID, you can put that in the ~/.iron.json file: .iron.json { \"project_id\" : \"GLOBAL PROJECT ID\" , \"iron_cache\" : { \"project_id\" : \"IRONCACHE ONLY PROJECT ID\" } } If you don't want to write things to disk or on Heroku or a similar platform that has integrated with Iron.io to provide your project ID and token automatically, the library will pick them up for you automatically. Setting Host It is useful to quickly change your host in cases where your region has gone down. If want to set the Host, Post, and Protocol specifically, simply include those keys in that project's iron.json file: iron.json { \"project_id\" : \"PROJECT ID HERE\" , \"token\" : \"YOUR TOKEN HERE\" \"port\" : 443 , \"protocol\" : \"https\" , \"host\" : \"mq-rackspace-ord.iron.io\" } "
}, {
"title": "IronWorker Configuration Variables",
"url": "/worker/reference/configuration-variables/index.html",
"section": null,
"date": null,
"categories": [],
"summary": null,
"short": "There are three primary methods of setting important configuration information for your IronWorker, setting the config variable , sending your worker the variables via the payload/params, and finally through our hud interface Table of Contents Setting Config Variables via Task Payload Setting Config Variables via File (yaml & json) Setting Config Variables via Iron.io HUD Set config variables via Worker's",
"body": " There are three primary methods of setting important configuration information for your IronWorker, setting the config variable , sending your worker the variables via the payload/params, and finally through our hud interface Table of Contents Setting Config Variables via Task Payload Setting Config Variables via File (yaml & json) Setting Config Variables via Iron.io HUD Set config variables via Worker's task payload/params When queueing up a task you can easily pass configuration information via the payload. Each enviroment has it's own way to access parameters inside a worker. in ruby you access it by calling the params variable, in php you access it via ``` php $postdata = file_get_contents(\"php://input\"); ``` This is preferable when your worker may have different variations, adapters or strategies when receiving different types of payload. That's it. The next example walks you through setting a static configuration on you IronWorker upon upload Set config variables on upload via .yml or .json First create a .yml or a .json file and save it within your worker directory or directory where you will be running your IronWorker commandline tools from ex: config.yml or config.json config.json { \"MY_CONFIG_VARIABLE\" : 12345678901234567890 , \"MY_CONFIG_VARIABLE2\" : \"ASDGFHJTHFVCBDXFGHSF\" ,} config.yml \"MY_CONFIG_VARIABLE\" : 12345678901234567890 \"MY_CONFIG_VARIABLE2\" : \"ASDGFHJTHFVCBDXFGHSF Next run your standard upload command iron_worker upload --worker-config config.yml and you should see in the upload logs that your configuration variables were uploaded with your worker When your task is run, a file containing this configuration will be available to your worker and the location of this file will be provided via the program args right after -config . For example, to load your config with Ruby: require 'json' config = {} ARGV . each_with_index do | arg , i | if arg == \"-config\" config = JSON . parse ( IO . read ( ARGV [ i + 1 ] )) end end Set config variables in the Iron.io HUD aka dashboard it is often times useful to change configuration variables without having to reupload your code. We allow you to do so visually with our HUD (dashboard) by following two simple steps. Navigate to the hud http://hud.iron.io . next navigate to your uploaded code's information by clicking on the code tab and your worker's name. NOTE: for those who remotely build their workers, please make sure you select your worker and not the remote build process Through your Worker Code's dashboard you have a useful box where you can change your configuration information in yml format! i.e Key seperated by a colon and the value without quotations and no commas delimiting the values. click edit and...voila! your worker now has updated configuration variables without having to reupload your worker or enter the commandline! "
}, {
"title": "Local Disk Storage",
"url": "/worker/reference/disk-storage/index.html",
"section": null,
"date": null,
"categories": [],
"summary": null,
"short": "Workers can make use of a large amount of local temporary storage space that's dedicated on a per-worker basis. You can perform almost any file operations with it that you could within a local environment. You access this storage by making use of the variable user_dir in the worker. This variable contains the path of the directory your worker has",
"body": "Workers can make use of a large amount of local temporary storage space that's dedicated on a per-worker basis. You can perform almost any file operations with it that you could within a local environment. You access this storage by making use of the variable user_dir in the worker. This variable contains the path of the directory your worker has write access to. Saving Files to Disk Here's an example that downloads a file from the web and saves it in local storage. The log snippet just logs the contents of user_dir . local_file.rb class S3Worker < IronWorker :: Base filepath = user_dir + \"ironman.jpg\" File . open ( filepath , 'wb' ) do | fo | fo . write open ( \"http://www.iron.io/assets/banner-mq-ironio-robot.png\" ) . read end user_files = %x[ls #{ user_dir . inspect } ] log \" \\n Local Temporary Storage ('user_dir')\" log \" #{ user_files } \" end Location of Uploaded Files and Folders The user_dir directory also contains any uploaded files that you've included with your code. Note that any folders or nested files will appear at the top level. For example, let's say you upload a file with the following structure: merge \"../site_stats/client.rb\" This file will be placed in the user_dir directory. You can make use of it there, create local/remote path references (using the local/remote query switch in your worker), or replicate the path and move the file there. (We recommend one of the first two options.) user_dir/ ... client.rb ... In Ruby, to make use of the file (in the case of a code file), you would use a require_relative statement with the base path. require_relative './client' Use Cases Typical use cases might include: Downloading a large product catalog or log file, parsing it, processing the data, and inserting the new data into a database Downloading an image from S3, modifying it, and re-uploading it, Splitting up a large video file or the results of a website crawl in S3, then creating and queuing multiple workers to process each video or page slice. Best Practices This is temporary storage and only available while the worker is running. You'll want to make use of databases and object stores to persist any data the worker produces. We recommend that you not pass any large data objects or data files in to workers, but instead use object storage solutions like AWS S3 or databases. To do this, just upload your data to S3 or store it in the database from your app, then pass the identifier of the object to the worker. The worker can then access the data from the data store. This is more efficient in terms of worker management and better for exception handling. Examples You can find more examples of making use of local disk storage here: Image Processing Example on Github Image Processing Example with Carrierwave on Github S3 Example on Github S3 Example 2 on Github "
}, {
"title": ".worker Files",
"url": "/worker/reference/dotworker/index.html",
"section": null,
"date": null,
"categories": [],
"summary": null,
"short": "We like to encourage our users to think of workers as independent chunks of functionality, reusable building blocks that they can then build an application out of. .worker (pronounced dotworker) files serve to reinforce this idea, by allowing users to define their workers outside of their application code. .worker files are pretty easy to understand: their purpose is to construct",
"body": "We like to encourage our users to think of workers as independent chunks of functionality, reusable building blocks that they can then build an application out of. .worker (pronounced dotworker) files serve to reinforce this idea, by allowing users to define their workers outside of their application code. .worker files are pretty easy to understand: their purpose is to construct the code packages that are uploaded to the IronWorker cloud. This is done in almost precisely the same way that code packages are constructed in scripts, but .worker files can be bundled with the workers and stored in their own git repository, or moved about without any worry that unnecessary code lingers in your application, cluttering it, or that you're missing code that it will take for your worker to run. Workers can finally be their own self-contained units. Table of Contents Making a Worker File Structure Syntax Reference Making a Worker File A common misconception is that .worker files are named \".worker\" (e.g., ~/my_worker/.worker ). This is not the case. Instead, the files are given unique names that will identify your worker in our system. So, for example, cool_worker.worker will create a worker named \"cool_worker\". Note: You should never have a file named \".worker\". They should always be given a unique, recognisable name: \"HelloWorld.worker\", \"SendMail.worker\", \"do_something_awesome.worker\", etc. Structure The .worker file mirrors the code you would use to construct a package in your application. Here's a simple example: .worker runtime \"ruby\" stack \"ruby-1.9\" exec \"hello_worker.rb\" This .worker file defines a code package that consists of a single hello_worker.rb script, which is what will be executed when your worker is run. You can also add the files your worker is dependent upon: .worker runtime \"ruby\" stack \"ruby-1.9\" exec \"hello_worker.rb\" file \"dependency.rb\" file \"config.json\" That worker will have access to the dependency.rb and config.json files after it's uploaded. Everything you can do in your application to construct a code package, you can do in a .worker file. Here's an example that includes a gem: .worker runtime \"ruby\" stack \"ruby-1.9\" exec \"hello_worker.rb\" file \"dependency.rb\" file \"config.json\" gem \"mongoid\" Not only will this worker have access to dependency.rb and config.json , it will have access to the mongoid gem, as well. Note: Gems merged with the .worker file will not be automatically required in your worker files, just as they are not when you merge gems in your application's code. Syntax Reference The following syntax is valid in .worker files: Keyword Runtime Purpose Arguments runtime all Set worker's runtime \"binary\" How it runs \"go\" How it runs \"java\" How it runs \"mono\" How it runs \"node\" How it runs \"php\" How it runs \"python\" How it runs \"ruby\" How it runs stack all Set worker's stack \"ruby-1.9\" \"ruby-2.1\" \"java-1.7\" \"scala-2.9\" \"mono-2.10\" \"mono-3.0\" \"php-5.4\" \"node-0.10\" \"python-2.7\" \"python-3.2\" name all Set worker's name The name to give the worker set_env all Sets an environment variable accessible within your worker. set_env \"KEY\", \"VALUE\" full_remote_build all Activates full remote build mode. true or false , defaults to false build node package dependencies remotely through uploaded package.json \"npm install\" exec all Merge a file and designate it as the file to be executed when the worker is run. You may only have one file designated as the executable per worker. The path to the file The name to give the worker. Defaults to a camel-cased version of the file name will be used. (optional) file all Merge a file into the code package. The path to the file The path the file should be stored under in the package. Defaults to the root directory. (optional) dir all Merge an entire directory (and all its contents) into the code package. The path to the directory The path the directory should be stored under in the package. Defaults to the root directory. (optional) deb all Merge a x86_64 deb package into the code package. Note: dependencies will not be handled. The path to the deb file gem ruby Merge a gem with its dependencies. Note: binary extensions will not be merged, as they are not supported. The name of the gem to merge, as it appears in a require statement The version requirement for the gem. Defaults to \">= 0\". (optional) gemfile ruby Merge all the gems from the specified Gemfile. The path to the Gemfile The groups to include in the merge. Defaults to the \"default\" group—the top level. (optional) . Example: gemfile 'Gemfile', 'default', 'othergroup' jar java Merge a jar into code package. Note: it'll be available in worker's classpath. The path to jar file pip python Merge a pip package with its dependencies. The name of the pip package to merge. The version requirement for the pip package. Defaults to latest available at pypi. "
}, {
"title": "IronWorker Environment",
"url": "/worker/reference/environment/index.html",
"section": null,
"date": null,
"categories": [],
"summary": null,
"short": "Table of Contents Operating System Runtime Environments Installed Linux Packages Maximum Data Payload Memory per Worker Local Disk Space per Worker Maximum Run Time per Worker Priority Queue Management Maximum Scheduled Tasks per Project Scheduled Task Frequency Security Groups and IP Ranges Runtime Environments Below you can see the pre-installed versions of languages/tools in the IronWorker environment in different stacks.",
"body": " Table of Contents Operating System Runtime Environments Installed Linux Packages Maximum Data Payload Memory per Worker Local Disk Space per Worker Maximum Run Time per Worker Priority Queue Management Maximum Scheduled Tasks per Project Scheduled Task Frequency Security Groups and IP Ranges Runtime Environments Below you can see the pre-installed versions of languages/tools in the IronWorker environment in different stacks. To use, add 'stack \"stack_name\"' to your .worker file. Example: runtime \"ruby\" stack \"ruby-2.1\" exec \"hello_worker.rb\" Stack name Language/Tool Version Operating System Deb packages default* Ruby-1.9.3p194, java-1.7, scala-2.9, mono-2.10, php-5.3, node-0.8, python-2.7 Ubuntu 12.10 Packages ruby-1.9 Ruby 1.9.3p194 Ubuntu 12.10 Packages ruby-2.1 Ruby 2.1.0p0 Ubuntu 12.10 Packages java-1.7 Java 1.7.0_51 OpenJDK Ubuntu 12.10 Packages java-1.8 Java 1.8.0_20 Ubuntu 12.04.5 Packages scala-2.9 Scala 2.9.2 Ubuntu 12.10 Packages mono-2.10 Mono JIT 2.10.8.1 Ubuntu 12.10 Packages mono-3.0 Mono JIT 3.0. Ubuntu 12.10 Packages mono-3.6 Mono JIT 3.6. Ubuntu 12.10 Packages php-5.4 PHP 5.4.26 Ubuntu 12.10 Packages php-5.5 PHP 5.5.10 Ubuntu 12.04.5 Packages node-0.10 Node.js 0.10 Ubuntu 12.10 Packages python-2.7 Python 2.7.6 Ubuntu 12.10 Packages python-3.2 Python 3.2.5 Ubuntu 12.10 Packages ffmpeg-2.3 ffmpeg-2.3, GPAC-0.5.1, php-5.3, node-0.10, ruby-1.9.3p0, python-2.7, x264-0.142.x Ubuntu 12.04.5 Packages *default stack loading when a \"stack\" option is not declared in .worker file The operating system and version information is provided for completeness and transparency. We recommend, however, you avoid binding your workers to specifics of the OS as much as possible. Note: It may be possible to update the language by adding related deb packages to your worker although you should go this route only if necessary. Use of earlier versions, especially major versions, may run into difficulties. Installed Linux Packages IronWorker contains several popular Linux packages as part of the standard worker environment. Package Full Name Purpose ImageMagick ImageMagick Image Processing Image processing FreeImage The FreeImage Project Image processing SoX Sound eXchange Library Sound processing cURL Client URL Request Library URL file processing These are included because they are common binary libraries. Other binary libraries and files can be included as part of your worker code package, though you'll first need to compile them to target Linux x64 architectures. If you don't see what you need here, please contact us and tell us what you're looking for. If it's a common/popular package, we can certainly look to include it. Maximum Data Payload The following is the maximum data payload that can be passed to IronWorker. A data payload that exceeds this size will generate an error response from the API. Maximum Data Payload: 64KB Tip: We recommend that you avoid sending large payloads with your workers. Instead use a data store to hold the data and then pass an ID or reference to the worker. The worker can grab the data and then do its processing. It's more efficient on the API as well as better in terms of creating atomic/stateless processing. Memory per Worker The standard worker sandbox environment contains a certain amount of accessible memory. This amount should be sufficient for almost all workloads. We are working on a super worker environment that would allow greater memory allocations. Please contact us if you have specific needs here. Memory per Worker: ~ 320MB Tip: We recommend distributing workloads over multiple workers—not only for better resource management, but also to take advantage of the massive concurrency enabled by a cloud worker system. Local Disk Space per Worker Each worker task has local disk space available to it for use on a temporary basis while the worker is running. You have full read/write privileges to create directories and files inside this space, and can perform most ordinary file operations. This directory is used as the current working directory (\" . \") when executing your workers. Local Disk Space: 10GB Maximum Run Time per Worker There is a system-wide limit for the maximum length a task may run. Tasks that exceed this limit will be terminated and will have timeout as their status. Max Worker Run Time: 3600 seconds (60 minutes) Tip: You should design your tasks to be moderate in terms of the length of time they take to run. If operations are small in nature (seconds or milliseconds) then you'll want to group them together so as to amortize the worker setup costs. Likewise, if they are long-running operations, you should break them up into a number of workers. Note that you can chain together workers as well as use IronMQ, scheduled jobs, and datastores to orchestrate a complex series or sequence of tasks. Priority Queue Management Each priority (p0, p1, p2) has a targeted maximum time limit for tasks sitting in the queue. Average queue times will typically be less than those listed on the pricing page. High numbers of tasks, however, could raise those average queue times for all users. To keep the processing time for high priority jobs down, per user capacities are in place for high priority queues. Limits are on per-queue basis and are reset hourly. High priority tasks that exceed the limit, are queued at the next highest priority. Only under high overall system load should queue times for tasks exceeding the capacity extend beyond the initial targeted time limits. Usage rates will be based on the actual priority tasks run on, not the priority initially queued. Priority Capacity Per Hour Per User p2 100 p1 250 Maximum Scheduled Tasks per Project The following is the default number of scheduled tasks. It should be sufficient for even the largest projects. If you would like this number increased, however, please feel free to contact us. Max Scheduled Tasks: 100 Tip: A common mistake is to create scheduled jobs on a per user or per item basis. Instead, use scheduled jobs as master tasks that orchestrate activities around sets of users or items. When scheduled tasks run, they can access databases to get a list of actions to perform and then queue up one or more workers to handle the set. View the page on scheduling for more information on scheduling patterns and best practices. Scheduled Task Frequency Tasks can be scheduled to run every N seconds or more specifying N using the run_every parameter and where N > 60. (The minimum frequency is every 60 seconds.) Note: A task may be executed a short time after its scheduled frequency depending on the priority level. (Scheduled tasks can be given a priority; higher priorities can reduce the maximum time allowed in queue.) Security Groups and IP Ranges IronWorker provides an AWS security group and IP ranges in the event users want to isolate AWS EC2, RDS, or other services to these groups/ranges. EC2 Security Group Account ID Security Group String simple_worker_sg 7227-1646-5567 722716465567/simple_worker_sg "
}, {
"title": "IronWorker Reference",
"url": "/worker/reference/index.html",
"section": null,
"date": null,
"categories": [],
"summary": null,
"short": "The IronWorker reference documentation contains all the low-level information about IronWorker. Every little detail has been recorded here for you. REST/HTTP API Every endpoint, every parameter of our API is at your fingertips. Environment Know exactly what environment your workers will be executed in before you deploy them. Configuration Everything you need to know to make the IronWorker client libraries",
"body": "The IronWorker reference documentation contains all the low-level information about IronWorker. Every little detail has been recorded here for you. REST/HTTP API Every endpoint, every parameter of our API is at your fingertips. Environment Know exactly what environment your workers will be executed in before you deploy them. Configuration Everything you need to know to make the IronWorker client libraries work they way you want them to. .worker Files Use these custom spec files to make your workers independent and self-contained, easily stored in a repository. CLI All the arguments and commands in the IronWorker command line interface, at your disposal. Something Missing? Can't find the information you need here? Our engineers are always available and will be happy to answer questions. "
}, {
"title": "The IronWorker's Payload",
"url": "/worker/reference/payload/index.html",
"section": null,
"date": null,
"categories": [],
"summary": null,
"short": "Workers are just sections of discrete, modular code that you queue tasks against. It's rare, though, that you want to run the same code repeatedly with no changes or variables. To address this, IronWorker has the concept of a \"payload\". The payload is the same thing, conceptually, as an argument or variable that is passed to a function, method, or",
"body": "Workers are just sections of discrete, modular code that you queue tasks against. It's rare, though, that you want to run the same code repeatedly with no changes or variables. To address this, IronWorker has the concept of a \"payload\". The payload is the same thing, conceptually, as an argument or variable that is passed to a function, method, or command — just a piece of information you want to supply at runtime and make available in the worker itself. Client is able to specify payload for queued or scheduled workers. Payloads are strings. But usually we suggest to use JSON format. Also, when using ruby , php , and node runtimes JSON payload will be automatically parsed. Table of Contents Get Payload in a Worker Other Information Payload Filtering in the HUD Get Payload in a Worker When payload is posted to a worker when using the queue or schedule API endpoints, then it is stored in the database. Before your worker is launched in the IronWorker environment , this payload is then stored in your worker's runtime directory. The location of that file is passed to the worker using the -payload command line flag. To get the contents of your payload, you need to: Read the -payload flag using ARGV (or whatever your language uses to read command line flags) Open and read the file specified by the -payload flag Parse the contents of the file (for example, if you encoded the payload when queuing the task) Workers that use either ruby or php runtimes have more possibilities to access the payload. Access to a Payload in Ruby Runtime Ruby workers have access to special methods to obtain the payload. payload # string representation of payload params # json parsed payload # you can also access the following config # your configuration variables see the configuration variables page for more info iron_task_id # a worker's own task id, useful for checking status via api. If specified payload is in a JSON format it will be parsed automatically into params. Access to a Payload in PHP Runtime The payload in PHP runtime is accessible by calling method getPayload() . If payload is a parsable JSON string it will be converted automatically. <?php payload = getPayload (); // parsed JSON or string ?> <!-- you can also access the following --> <?php config = getConfig (); // parsed JSON or string ?> Access to a Payload in Node.js Runtime var worker = require ( 'node_helper' ); console . log ( \"params:\" , worker . params ); // you can also access the following console . log ( \"config:\" , worker . config ); console . log ( \"task_id:\" , worker . task_id ); Other Information Your worker will also be passed -id and -d command line arguments. The value of -id will be the ID of the task that is currently being executed, and the value of -d will be the user-writable directory that can be used for temporary storage for the duration of the task's execution. Payload Filtering in the HUD You can see your tasks payload in the HUD . Go to Worker project's section, click on \"Tasks\" tab and then on one of your workers where you use payload. Click on \"Details\" link on task. HUD filters payload by the next rule. It looks for all keys on any level of nesting which contains substrings: token security password secret pass connectionstring api_key license and change their values to [FILTERED] . Example: original payload { \"database\" : { \"connectionstring\" : \"postgres://usr:pass@host:port/db\" }, \"iron\" : { \"project_id\" : \"1234567890\" , \"token\" : \"TOKEN1234\" }, \"3rdparty_service\" : { \"user\" : \"username\" , \"service_pass\" : \"userp4ss\" } } Example: payload visible through the HUD { \"database\" : { \"connectionstring\" : \"[FILTERED]\" }, \"iron\" : { \"project_id\" : \"1234567890\" , \"token\" : \"[FILTERED]\" }, \"3rdparty_service\" : { \"user\" : \"username\" , \"service_pass\" : \"[FILTERED]\" } } "
}, {
"title": "Securing Your Workers",
"url": "/worker/reference/security/index.html",
"section": null,
"date": null,
"categories": [],
"summary": null,
"short": "Years of work and many layers are involved in ensuring cloud resources are secure. Iron.io inherits these industrial-strength security measures through the infrastructures we operate on. These measures include physical, network, data continuity, service access and service-specific protections, among others. Please refer to the AWS Security Whitepaper for a detailed description of this extensive list. Iron.io takes further measures to",
"body": "Years of work and many layers are involved in ensuring cloud resources are secure. Iron.io inherits these industrial-strength security measures through the infrastructures we operate on. These measures include physical, network, data continuity, service access and service-specific protections, among others. Please refer to the AWS Security Whitepaper for a detailed description of this extensive list. Iron.io takes further measures to isolate and protect processes at the IronWorker platform level, just as we isolate and protect instances at the infrastructure level. These steps include access restrictions, process isolation, resource monitoring/management, service restrictions, and more. Security Measures OAuth2 Authorization Iron's API uses OAuth 2 , an industry-standard authentication scheme, to securely authenticate API requests. This scheme relies on SSL for security, instead of requiring your application to do cryptographic signing directly. This makes the API easy to use without compromising security. HTTPS Encryption The IronWorker API is the standard method of interacting with workers and projects. HTTPS encryption is the default access method for the API and the recommended approach for all access requests. All the client libraries provided by Iron.io use HTTPS encryption by default. This renders most common packet-interception attacks useless. Process Isolation IronWorker makes use of OS-level sandboxing to keep processes isolated from system influences and other processes in the system. Each IronWorker process runs in a virtualized container that appears to processes as a unique, minimal Ubuntu installation. Runtime limits are placed on the amount of RAM and disk each worker process may consume. Workers that exceed the memory limit will error out and exit. CPU allocation is balanced across IronWorker processes, but may burst to higher CPU allocation depending IronWorker system load. Resource Management IronWorker uses process-level runtime monitoring/management to ensure that workers receive a standard set of compute resources. Workers may utilize more resources depending on system load (which could introduce slight performance variations across workers) but never less than the standard level. SMTP and Other Service Restrictions IronWorker, by design, does not provide SMTP host services. Workers must use third-party services such as GMail , SendGrid , Amazon SES , or other service providers. Users must also adhere to Iron.io's Use Policy . AWS Security Groups and IP Ranges IronWorker provides an AWS security group and IP ranges in the event users want to isolate AWS EC2, RDS, or other services to these groups/ranges. Please note that this security group only works in the US East region. EC2 Security Group Account ID Security Group String Security Group ID simple_worker_sg 7227-1646-5567 722716465567/simple_worker_sg sg-0d500c64 Accessing AWS RDS Resource When accessing Amazon RDS resources please use the private ip address of your instances rather than the public DNS url that amazon provides. Retrieve the Private IP of your RDS instance navigate to your [https://console.aws.amazon.com](Amazon Web Services Console) and copy your public endpoint. Ping the public endpoint in your commandline using the ping command to get the private ip address. example: ping exampledb.XXXX.us-east-2.rds.amazonaws.com . note to omit the port when running this command. Use this ip address you get that comes back as your connection endpoint. Accessing AWS EC2 When accessing Amazon EC2 resources again use the private ip address of your instances rather than the public dns url that amazon provides. AWS's EC2 dashboard makes accessing this ip simpler than the previous example. Retrieve the private ip of your RDS resource navigate to your [https://console.aws.amazon.com](Amazon Web Services Console) and copy your public endpoint. Use the ip address that is available on this view Security Guidelines/Best Practices Environment Variables/Code Separation Avoid including any sensitive data or credentials within your code. Instead, include them as part of the data payload . This is in keeping with the 12-Factor app tenet regarding Config and its guidance on strict separation of config from code . Create Worker-Specific Credentials Make use of worker-specific credentials that only your workers depend on. For example, database users can be set up specifically for one or more workers. Additional auth tokens can be created for API services. Restricting/limiting these credentials to only the services/tables/capabilities workers need will provide an added level of isolation. It also makes changing/rotating credentials easier. Encrypt Data Payloads Consider encrypting your sensitive data before including it as part of the payload, and then decrypt it in the worker. This measure requires an attacker compromise both the payload and the worker in order to gain access to the data. Restrict Logging Do not log sensitive data. This includes sending information to STDOUT, as STDOUT is included in IronWorker's logs. If certain ID information is needed then provide an excerpt of the data (e.g xxxx4f689a5 or 32e78f....) that can be used for identification purposes. Questions/Concerns About Security Issues We've taken measures to ensure the security of your data in our systems, and we're working hard to educate customers on how to make the most of that security. Our mission is to take the stress out of managing cloud infrastructure, and that includes concerns about security and compromised data. If you have any questions, please do not hesitate to get in touch with us. We encourage the dialogue and want to do everything we can to ensure the safety of your data. Enter a support ticket or join the public support chat room . It's staffed almost constantly, around the clock, and we'd be happy to answer questions or provide advice on a case-by-case basis. "
}, {
"title": "Scheduling Tasks",
"url": "/worker/scheduling/index.html",
"section": null,
"date": null,
"categories": [],
"summary": null,
"short": "IronWorker tasks are flexible; they don't have to be queued immediately. Rather, you can schedule a task to be queued at a specific time or after a set amount of time. This article shows you how to create them using the iron_worker_ng Ruby gem Scheduled jobs are separate resources than queued tasks. When scheduled tasks run, they queue a task",
"body": "IronWorker tasks are flexible; they don't have to be queued immediately. Rather, you can schedule a task to be queued at a specific time or after a set amount of time. This article shows you how to create them using the iron_worker_ng Ruby gem Scheduled jobs are separate resources than queued tasks. When scheduled tasks run, they queue a task to execute the worker code. The Scheduled Task has a Scheduled ID. The task that executes is separate and has a distinct Task ID. You monitor Scheduled Tasks in the Schedule tab in the HUD. Tasks that subsequently queued can be monitored within the Task tab. Table of Contents Schedule Task API Reference Scheduling Best Practices Schedule Task via HUD/Dashboard Scheduling Examples with Ruby Client API Reference Endpoint POST /projects/ {Project ID} /schedules URL Parameters Project ID : The ID of the project that you want to schedule the task in. Request The request should be a JSON object with a \"schedules\" property containing an array of objects with the following properties: code_name : The name of the code package to execute. payload : A string of data to pass to the code package on execution. Optionally, each object in the array can specify the following properties: start_at : The time the scheduled task should first be run. run_every : The amount of time, in seconds, between runs. By default, the task will only run once. run every will return a 400 error if it is set to <a href=\"/worker/reference/environment/#minimum run every time\">less than 60. end_at : The time tasks will stop being queued. Should be a time or datetime. run_times : The number of times a task will run. priority : The priority queue to run the task in. Valid values are 0, 1, and 2. Task priority determines how much time a task may sit in queue. Higher values means tasks spend less time in the queue once they come off the schedule. Access to priorities depends on your selected IronWorker plan see plans . You must have access to higher priority levels in your chosen plan or your priority will automatically default back to 0. The standard/default priority is 0. timeout : The maximum runtime of your task in seconds. No task can exceed 3600 seconds (60 minutes). The default is 3600 but can be set to a shorter duration. delay : The number of seconds to delay before scheduling the tasks. Default is 0. task_delay : The number of seconds to delay before actually queuing the task. Default is 0. cluster : cluster name ex: \"high-mem\" or \"dedicated\". This is a premium feature for customers for customers to have access to more powerful or custom built worker solutions. Dedicated worker clusters exist for users who want to reserve a set number of workers just for their queued tasks. If not set default is set to \"default\" which is the public IronWorker cluster. Best Practices Many Tasks To Run in Future - If you have lots of the same individual tasks to run in the future (sending emails to users, for example), we suggest not creating individual scheduled tasks (or queuing lots of tasks with delays). It's better to create a scheduled task that repeats on a regular basis. This scheduled task should then query a data base or datastore for the users to email (or actions to take). It can then spin up one or more sub-tasks to execute the work (creating one task per action or better yet, allocating a certain number of data slices to each task to better amortize the setup cost of a task). Here are a few posts on the topic: Pattern: Creating Task-Level Workers at Runtime Anti-Pattern: Lots of Scheduled Jobs Finer-Grained Scheduling - There may be the need to run tasks on specific days or dates (end of month or Tuesday and Thursday). We recommend creating a scheduled job that runs frequently and then does a quick check to see if the scheduling condition is met. For example, running daily and checking if it's the last day of the month or a Tuesday or Thursday. If so, then continue with the task; if not, then exit. (We're looking at addressing finer-grain scheduling options but don't accommodate it at present.) Specific times are expressed as time objects and so can be in UTC or local time. They'll be recorded in the system as UTC but displayed in the HUD/dashboard in the timezone that you specify for the HUD. Note: Scheduled tasks may not be executed at the scheduled time; they will simply be placed on the queue at that time. Depending on the circumstances, a task may be executed a short time after it is scheduled to be. Tasks will never be executed before their schedule, however. (Scheduled tasks can be given a priority; higher priorities can reduce the time in queue.) Schedule Task via HUD/Dashboard We've added a easy to use GUI to help you create and manage your schedules. Click on the create schedule button on our schedule's page in the dashboard Fill in the relevant parameters for the scheduled task you want to create Click on the create schedule button on our schedule's page in the dashboard You can now view your current, past, and deleted scheduled in the list view. If you click on a schedule you have the ability to view the details and edit/update your schedules accodingly. note: updating a schedule will delete the old one and create a new one. Scheduling with the Ruby Client Scheduling a task to be queued at a specific time is easy: schedule = client . schedules . create ( 'MyWorker' , payload , { :start_at => Time . now + 3600 }) To run on a regular schedule, then just include an interval to repeat execution of the scheduled task. (This is useful, for example, for sending out daily notifications or cleaning up old database entries.) schedule = client . schedules . create ( 'MyWorker' , payload , { :run_every => 3600 }) # will be run every hour These repeating tasks can also be set to queued at a specific start time: schedule = client . schedules . create ( 'MyWorker' , payload , { :start_at => Time . now + 3600 , :run_every => 3600 }) # will be run every hour, starting an hour from now You can also schedule a task to be queued after a certain delay: schedule = client . tasks . create ( 'MyWorker' , payload , { :delay => 3600 }) # queues the task after one hour Note: You can use a delay for a scheduled job and for a queued task . The difference is a delayed scheduled task will kick off a regular task whereas a delayed task executes directly (after the delay). We suggest using a delayed task if the delay time is brief; a scheduled task if it's longer into the future and/or repeats frequently. See the note below, however, on good practices especially for large numbers of individual tasks to run in the future. Finally, you can also control how many times a repeating task is run: schedule = client . schedules . create ( 'MyWorker' , payload , { :start_at => Time . now + 3600 , :run_every => 3600 , :run_times => 24 }) # will be run every hour for one day, starting an hour from now "
}, {
"title": "Turn Key Workers",
"url": "/worker/turn_key_workers/index.html",
"section": null,
"date": null,
"categories": [],
"summary": null,
"short": "This page has been moved here .",
"body": "This page has been moved here . "
}, {
"title": "Turnkey Workers",
"url": "/worker/turnkey/index.html",
"section": null,
"date": null,
"categories": [],
"summary": null,
"short": "Turnkey workers are pre-made workers you can add to your account and start using immediately. No coding required! And it doesn't matter which language the workers are written in because you can queue up jobs for these workers from any language using our API. The basic usage for all of these is: iron worker upload http://github.com/PATH TO WORKER FILE Queue",
"body": "Turnkey workers are pre-made workers you can add to your account and start using immediately. No coding required! And it doesn't matter which language the workers are written in because you can queue up jobs for these workers from any language using our API. The basic usage for all of these is: iron worker upload http://github.com/PATH TO WORKER FILE Queue up tasks for the worker or schedule it. That's it! Super easy, super powerful. See this blog post on shareable workers for more background. Turnkey Worker List Hello Worker - a simple example worker to try it out Image Processing Worker - process images at scale using ImageMagick, no servers required! Email Worker - send emails through any smtp provider Twilio Worker - send twilio messages Hipchat Worker - send message to hipchat FFmpeg video processing Worker - process videos at scale using ffmpeg Contributing To add a worker you made to this list, just fork our Dev Center repository , add your worker to this page, then submit a pull request. "
}, {
"title": "IronWorker Webhooks",
"url": "/worker/webhooks/index.html",
"section": null,
"date": null,
"categories": [],
"summary": null,
"short": "Using IronWorker webhooks enables you to run pretty much anything you want whenever an event happens at a third party service that supports webhooks. Table of Contents How to Use IronWorker Webhooks Example Step 1 Step 2 Step 3 How to Use IronWorker Webhooks A Webhook is simply an HTTP POST API endpoint so you don't need any updates in",
"body": "Using IronWorker webhooks enables you to run pretty much anything you want whenever an event happens at a third party service that supports webhooks. Table of Contents How to Use IronWorker Webhooks Example Step 1 Step 2 Step 3 How to Use IronWorker Webhooks A Webhook is simply an HTTP POST API endpoint so you don't need any updates in your existing workers to use them. A typical workflow for a webhook is: Create and upload a worker Obtain webhook link: From the HUD Or by the CLI $ iron_worker webhook $WORKER_NAME Pass webhook link to 3rdparty service like GitHub, or as subscriber URL for IronMQ Push Queue Do something to trigger the webhook, say, commit to GitHub or post a message to a Push Queue. When the IronWorker service receives the HTTP POST request to your webhook endpoint, it will pass the request's body to a worker as a file, specified by -payload option . The URL's scheme for webhooks is: $ WORKER_API_URL/projects/ $PROJECT_ID /tasks/webhook?code_name = $CODE_NAME & oauth = $TOKEN Where: $WORKER_API_URL is https://worker-us-east.iron.io/2 $PROJECT_ID and $TOKEN are credentials to access to your project $CODE_NAME is name of your worker Example The best way to see how this works is via an example. The rest of this section will use a Github to Hipchat webhook where Github will hit the webhook and the worker will post to Hipchat. The full code is here . Step 1: Create a worker and upload it to IronWorker This is the same as you would create and upload a worker normally, the difference is in how the task is queued up. First let's create the worker: --- Now let's upload it: --- Step 2: Add your workers webhook URL to Github service hooks Github service hooks are where you can add webhooks for Github events. In your Github project, click Admin, Service Hooks, then Post-Receive URLs. In the text field, add the webhook url for your worker, it should look something like this: https://worker-us-east.iron.io/2/projects/{Project ID}/tasks/webhook?code_name={Code Name}&oauth={Token} The upload script above will print the exact URL to your console so you can just copy and paste it. Step 3: Commit and push some code to your github project and watch the magic happen! That's it! It will post your github commit information to the Hipchat room you specified in the config file. "
}, {
"title": "Image Processing",
"url": "/solutions/image-processing",
"section": null,
"date": "2012-05-31 00:00:00 -0700",
"categories": ["solutions"],
"summary": null,
"short": "Want to get right to the good stuff? Download the code from Github . Processing images is a key part of most social, retail, and mobile applications. Almost every application that uses photos needs to perform some manipulation on them to get them into a useful format. Let's say you have a lot of images coming into your app in",
"body": " Want to get right to the good stuff? Download the code from Github . Processing images is a key part of most social, retail, and mobile applications. Almost every application that uses photos needs to perform some manipulation on them to get them into a useful format. Let's say you have a lot of images coming into your app in certain sizes and formats. How do you convert them to thumbnails, resize them, apply color filters, or perform other transformations automatically without a lot of effort? It's easy to do these things programmatically using the ImageMagick libraries. These types of jobs are best done in the background—no reason to eat up front-end cycles and make users wait. The issue, then, is setting up the environment and scalable infrastructure to run the processing on—that's where IronWorker comes in. Managing servers to handle intermittent requests is not much fun. And if you convert batches of images at a time, it could take hours running on self-managed infrastructure. With IronWorker and the ImageMagick code libraries, that effort can be scaled-out on demand, without you having to manage a thing. Requirements The examples we'll create here use the following gems. Your specific use case may vary, but these are generally a good starting point. open-uri RMagick aws subexec mini_magick The ImageMagick binary libraries are already installed in the IronWorker system environment . You'll probably need to include client libraries to interface with the binary libraries, however. You can find more information on including gems on the Merging Gems page . Writing the Worker The first step for any worker is to write (and test!) the code on your local machine. IronWorker's environment is simply a Linux sandbox, so any code that runs on your system, assuming you properly package it, should run exactly the same on the IronWorker cloud. Passing Information It's generally best to pass IDs or URLs to images in the payload of the worker, as opposed to the images themselves—workers have a limited data payload size, and there's less opportunity for corruption of data. The worker will then use this unique identifier to retrieve the files to be processed and bring them into your worker's environment. In this, example, an image URL is passed in to the worker as a data param. The file is written to the temporary, private storage each worker is given. You can see the System Environment page for more information about the amount of memory and storage available to each worker. Retrieving the Image Retrieving the image is a simple matter of downloading it over HTTP: image_processor.rb def download_image filename = 'ironman.jpg' filepath = filename File . open ( filepath , 'wb' ) do | fo | fo . write open ( @params [ 'image_url' ] ) . read end filename end Storing the Image After processing the image, you're going to want to store it somewhere else —IronWorker's storage is only temporary, and it's completely wiped after the worker finishes running. This means that you don't have to worry about cleaning up your environment after your task is done, but it also means that you need to store the files somewhere before the worker finishes running. In this example, we're going to upload them to Amazon S3 using the aws gem. Note that @aws_access , @aws_secret , and @aws_s3_bucket will all need to be included in the task's payload. image_processor.rb def upload_file ( filename ) filepath = filename puts \" \\n Uploading the file to s3...\" s3 = Aws :: S3Interface . new ( @params [ 'aws_access' ] , @params [ 'aws_secret' ] ) s3 . create_bucket ( @params [ 'aws_s3_bucket_name' ] ) response = s3 . put ( @params [ 'aws_s3_bucket_name' ] , filename , File . open ( filepath )) if ( response == true ) puts \"Uploading succesful.\" link = s3 . get_link ( @params [ 'aws_s3_bucket_name' ] , filename ) puts \" \\n You can view the file here on s3: \\n \" + link else puts \"Error placing the file in s3.\" end puts \"-\" * 60 end Manipulating the Image Let's create a sample set of functions that process the image in various ways. (ImageMagick is an incredibly comprehensive library, so this is just a small sample of what's possible.) We'll use the following variables in these sample functions: Variable Meaning filename The path to the file you're manipulating. The file needs to be in your worker's environment. width The width, in pixels, of the post-manipulation image. height The height, in pixels, of the post-manipulation image. format The image format to output the post-manipulation image in. Resizing Images image_processor.rb def resize_image ( filename , width = nil , height = nil , format = 'jpg' ) image = MiniMagick :: Image . open ( filename ) original_width , original_height = image [ :width ] , image [ :height ] width ||= original_width height ||= original_height output_filename = \" #{ filename } _thumbnail_ #{ width } _ #{ height } . #{ format } \" image . resize \" #{ width } x #{ height } \" image . format format image . write output_filename output_filename end Generating a Thumbnail image_processor.rb def generate_thumb ( filename , width = nil , height = nil , format = 'jpg' ) output_filename = \" #{ filename } _thumbnail_ #{ width } _ #{ height } . #{ format } \" image = MiniMagick :: Image . open ( filename ) image . combine_options do | c | c . thumbnail \" #{ width } x #{ height } \" c . background 'white' c . extent \" #{ width } x #{ height } \" c . gravity \"center\" end image . format format image . write output_filename output_filename end Making a Sketch of an Image image_processor.rb def sketch_image ( filename , format = 'jpg' ) output_filename = \" #{ filename } _sketch. #{ format } \" image = MiniMagick :: Image . open ( filename ) image . combine_options do | c | c . edge \"1\" c . negate c . normalize c . colorspace \"Gray\" c . blur \"0x.5\" end image . format format image . write output_filename output_filename end Normalizing Image Colors image_processor.rb def normalize_image ( filename , format = 'jpg' ) output_filename = \" #{ filename } _normalized. #{ format } \" image = MiniMagick :: Image . open ( filename ) image . normalize image . format format image . write output_filename output_filename end Putting It All Together We've built all the tools, let's tie them together in a single worker now. image_processor.rb puts \"Downloading image\" filename = download_image () puts \"Generating square thumbnail\" processed_filename = generate_thumb ( filename , 50 , 50 ) upload_file ( processed_filename ) puts \"Generating small picture\" processed_filename = resize_image ( filename , nil , 100 ) upload_file ( processed_filename ) puts \"Generating normal picture\" processed_filename = resize_image ( filename , nil , 200 ) upload_file ( processed_filename ) puts \"Generating picture with tuned levels\" processed_filename = level_image ( filename , 10 , 250 , 1 . 0 ) upload_file ( processed_filename ) puts \"Tune picture\" processed_filename = normalize_image ( filename ) upload_file ( processed_filename ) puts \"Generating sketch from picture\" processed_filename = sketch_image ( filename ) upload_file ( processed_filename ) puts \"Generating charcoal_sketch from picture\" processed_filename = charcoal_sketch_image ( filename ) upload_file ( processed_filename ) puts \"Cutting image to 6 puzzles 3x3\" file_list = tile_image ( filename , 3 , 3 ) puts \"List of images ready to process,merging in one image\" processed_filename = merge_images ( 3 , 3 , file_list ) upload_file ( processed_filename ) Uploading the Worker Uploading the worker is pretty simple. We're going to use the IronWorker command line tool , to make life easier. Save the following as image_processor.worker : image_processor.worker gem 'aws' gem 'subexec' gem 'mini_magick' exec 'image_processor.rb' # Whatever you named the worker script Now to upload the worker, just navigate to the directory with the .worker file and the worker script, and run: Command Line $ iron_worker upload image_processor Processing Images With the Worker To process images with the worker, you just need to queue a task with the necessary parameters (your AWS credentials and the URL for the image). Here's an example from the command line: Command Line $ iron_worker queue ImageProcessor -p '{\"aws_access\": \"AWS ACCESS KEY\", \"aws_secret\": \"AWS SECRET KEY\", \"aws_s3_bucket_name\": \"AWS BUCKET NAME\", \"image_url\": \"http://dev.iron.io/images/iron_pony.png\"}' You can also queue tasks from within your application: image_processor.rb client . tasks . create ( 'ImageProcessor' , :aws_access => \"AWS ACCESS KEY\" , :aws_secret => \"AWS SECRET KEY\" , :aws_s3_bucket_name => \"AWS BUCKET NAME\" , :image_url => \"http://dev.iron.io/images/iron_pony.png\" , ) On Github You can find all the code for this example worker on Github . Feel free to copy, edit, and run it on IronWorker! :) Next Steps Any article on ImageMagick will necessarily omit a lot of the power that the library provides—there are just too many options and commands. If you're interested in doing more with ImageMagick, check out the official documentation on the ImageMagick website for a much more in-depth look at the possibilities. For those using ImageMagick from Ruby, we recommend the MiniMagick gem —it's a wrapper for the command line utility that uses less memory than the RMagick gem. "
}, {
"title": "Sending Email & Notifications",
"url": "/solutions/notifications",
"section": null,
"date": "2012-06-13 00:00:00 -0700",
"categories": ["solutions"],
"summary": null,
"short": "Sending notifications is a required part of almost any application or service. Whether it's sending email verification emails, texting users, sending out a newsletter, emailing usage data, or even a more complicated use case, it's important for you to keep in communication with your users. This communication never really needs to block requests, however. Notifications are asynchronous by nature, which",
"body": "Sending notifications is a required part of almost any application or service. Whether it's sending email verification emails, texting users, sending out a newsletter, emailing usage data, or even a more complicated use case, it's important for you to keep in communication with your users. This communication never really needs to block requests, however. Notifications are asynchronous by nature, which makes them a perfect match for Iron.io's services. As your application grows, your notification system needs to scale with your user base and usage. This, again, is something that the elastic, on-demand, massively scalable Iron.io architecture supports out of the box. Basics Notification workers generally follow the same three-step process: Create Your Workers . Create different workers to handle a variety of emails and notifications—alerts, daily summaries, weekly updates, personalized offers, special notices, and more. Choose Your Delivery Gateway . Use an SMTP gateway like SendGrid or an API like Android C2DM and Twilio to manage the actual sending, monitoring, and analysis of the delivery step. Process and Send Notifications in Parallel . Use IronWorker to handle the processing and interface with the gateway. Queue up thousands of jobs at once or use scheduled jobs to send messages at set times. The Worker The worker can also be split up into three major steps: initializing the notification headers, preparing and sending the notification, and signaling exceptions and recording the status. For a detailed example using SendGrid, IronWorker, and ActionMailer, check out our blog post . Preparing the Headers Based on your gateway, your language, and your library, this step may be trivial. It consists largely of configuring the sender, the subject, and other information that is common to all the notifications. Preparing the Notification This will again depend on your specific implementation, but this will almost always consist of a loop through the users you want to notify. If the notifications are customized on a per-user basis, this is when the message would be generated. Finally, the worker sends the mail or notification. Signaling Exceptions & Recording Status This step is an important one if stability and logging are important to your notifications. \"Signaling Exceptions\" simply means notifying your application when something goes wrong--this can be as simple as a callback to an HTTP request endpoint, pushing a message to IronMQ, or flagging a notification in the database. However you want to do it, you should implement a way to trigger retries on notifications. Scheduled workers can help in this: simply schedule a worker to run every hour or every day and retry emails or notifications that threw errors or failed. If a messge fails a certain number of times, bring it to the attention of your team, as it probably indicates a bug in your worker. Recording status is important for providing an audit log. It's often important to know that, e.g., the user was warned about their overdue status. You should log that the notification or email was successfully sent, along with the timestamp. Sending in Parallel Notifications and emails can often need to be sent in a timely fashion; users are often not impressed with 9 hour delays between an event and receiving a notification of it. As your usage and user base grow, a single task that processes notifications one at a time will quickly become inadequate. As with the transformation of a 9-hour job to a 9-minute job , the solution to this lies in massive parallelisation. By queuing tens, hundreds, or thousands of tasks to manage your queue, you can process a staggering amount of notifications and emails in a brief time period. Many hands makes light work. Workers do have a setup time, and sending a notification is a pretty quick action. To try to make the most of the setup time, we usually recommend that tasks run for at least several minutes. The most straight-forward architecture, queuing a task for each notification, will work—it's just not the most efficient method available. A more elegant model would be to batch notifications into tens or hundreds, then queue that batch, instead of all tasks or just one . Using IronMQ to Guarantee Delivery IronMQ uses a get-delete paradigm that keeps messages on the queue until they are explicitly deleted, but reserves them for short periods of time for clients to prevent duplicate handling. This architecture makes it really easy to implement messages that will automatically retry. As long as a message is not removed from the queue until after the worker sends it, any error that causes the worker to fail or sending to fail will result in the message being returned to the queue to be tried again, without any intervention or error-handling on your part. Furthermore, IronMQ can be used for tightly controlled parallelisation. Assuming messages are queued up, workers can be spun up to consume the queue until it is empty. This allows you to spin up as many workers as you want, working in parallel with no modification to your code or batching. You can avoid overloading an API or database with thousands of simultaneous requests through this tight control over the number of running workers. More Info Need more help? Stop by the chat room and our engineers will help you architect a solution that fits your needs. "
}]
}