Diffstat (limited to 'docs/source')
-rw-r--r-- | docs/source/autoscale_tut.rst | 11
-rw-r--r-- | docs/source/boto_config_tut.rst | 288
-rw-r--r-- | docs/source/cloudsearch_tut.rst | 141
-rw-r--r-- | docs/source/cloudwatch_tut.rst | 6
-rw-r--r-- | docs/source/dynamodb_tut.rst | 679
-rw-r--r-- | docs/source/ec2_tut.rst | 118
-rw-r--r-- | docs/source/elb_tut.rst | 57
-rw-r--r-- | docs/source/emr_tut.rst | 19
-rw-r--r-- | docs/source/getting_started.rst | 177
-rw-r--r-- | docs/source/index.rst | 19
-rw-r--r-- | docs/source/rds_tut.rst | 108
-rw-r--r-- | docs/source/ref/cloudsearch.rst | 2
-rw-r--r-- | docs/source/ref/dynamodb2.rst | 26
-rw-r--r-- | docs/source/ref/index.rst | 1
-rw-r--r-- | docs/source/ref/redshift.rst | 26
-rw-r--r-- | docs/source/s3_tut.rst | 330
-rw-r--r-- | docs/source/ses_tut.rst | 11
-rw-r--r-- | docs/source/simpledb_tut.rst | 7
-rw-r--r-- | docs/source/sqs_tut.rst | 8
-rw-r--r-- | docs/source/vpc_tut.rst | 11 |
20 files changed, 1420 insertions, 625 deletions
diff --git a/docs/source/autoscale_tut.rst b/docs/source/autoscale_tut.rst
index 1f03ec05..1c3a0a18 100644
--- a/docs/source/autoscale_tut.rst
+++ b/docs/source/autoscale_tut.rst
@@ -32,9 +32,6 @@ There are two ways to do this in boto. The first is:
 >>> from boto.ec2.autoscale import AutoScaleConnection
 >>> conn = AutoScaleConnection('<aws access key>', '<aws secret key>')
 
-Alternatively, you can use the shortcut:
-
->>> conn = boto.connect_autoscale()
 
 A Note About Regions and Endpoints
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -43,7 +40,7 @@ default the US endpoint is used. To choose a specific region, instantiate the
 AutoScaleConnection object with that region's endpoint.
 
 >>> import boto.ec2.autoscale
->>> ec2 = boto.ec2.autoscale.connect_to_region('eu-west-1')
+>>> autoscale = boto.ec2.autoscale.connect_to_region('eu-west-1')
 
 Alternatively, edit your boto.cfg with the default Autoscale endpoint to use::
 
@@ -163,7 +160,8 @@ will now be a property of our ScalingPolicy objects.
 
 Next we'll create CloudWatch alarms that will define when to run the
 Auto Scaling Policies.
 
->>> cloudwatch = boto.connect_cloudwatch()
+>>> import boto.ec2.cloudwatch
+>>> cloudwatch = boto.ec2.cloudwatch.connect_to_region('us-west-2')
 
 It makes sense to measure the average CPU usage across the whole Auto Scaling
 Group, rather than individual instances. We express that as CloudWatch
@@ -199,7 +197,8 @@ beyond the limits of the Scaling Group's 'max_size' and 'min_size' properties.
 
 To retrieve the instances in your autoscale group:
 
->>> ec2 = boto.connect_ec2()
+>>> import boto.ec2
+>>> ec2 = boto.ec2.connect_to_region('us-west-2')
 >>> conn.get_all_groups(names=['my_group'])[0]
 >>> instance_ids = [i.instance_id for i in group.instances]
 >>> reservations = ec2.get_all_instances(instance_ids)
diff --git a/docs/source/boto_config_tut.rst b/docs/source/boto_config_tut.rst
index c134397c..dc8000e7 100644
--- a/docs/source/boto_config_tut.rst
+++ b/docs/source/boto_config_tut.rst
@@ -11,8 +11,8 @@ There is a growing list of configuration options for the boto library. Many of
 these options can be passed into the constructors for top-level objects such
 as connections. Some options, such as credentials, can also be read from
 environment variables (e.g. ``AWS_ACCESS_KEY_ID`` and ``AWS_SECRET_ACCESS_KEY``).
-But there is no central place to manage these options. So, the development
-version of boto has now introduced the notion of boto config files.
+It is also possible to manage these options in a central place through the use
+of boto config files.
 
 Details
 -------
@@ -33,6 +33,13 @@ methods of that object. In addition, the boto
 :py:class:`Config <boto.pyami.config.Config>` class defines additional
 methods that are described on the PyamiConfigMethods page.
 
+An example ``~/.boto`` file should look like::
+
+    [Credentials]
+    aws_access_key_id = <your_access_key_here>
+    aws_secret_access_key = <your_secret_key_here>
+
+
 Sections
 --------
 
@@ -50,7 +57,7 @@ boto requests. The order of precedence for authentication credentials is:
 * Credentials specified as options in the config file.
 
 This section defines the following options: ``aws_access_key_id`` and
-``aws_secret_access_key``. The former being your AWS key id and the latter
+``aws_secret_access_key``. The former being your AWS key id and the latter
 being the secret key.
 
 For example::
 
@@ -60,7 +67,7 @@ For example::
     aws_secret_access_key = <your secret key>
 
 Please notice that quote characters are not used to either side of the '='
-operator even when both your aws access key id and secret key are strings.
+operator even when both your AWS access key id and secret key are strings.
 
 For greater security, the secret key can be stored in a keyring and
 retrieved via the keyring package. To use a keyring, use ``keyring``,
@@ -76,11 +83,22 @@ Python path. To learn about setting up keyrings, see the
 `keyring documentation
 <http://pypi.python.org/pypi/keyring#installing-and-using-python-keyring-lib>`_
 
+Credentials can also be supplied for a Eucalyptus service::
+
+    [Credentials]
+    euca_access_key_id = <your access key>
+    euca_secret_access_key = <your secret key>
+
+Finally, this section is also used to provide credentials for the Internet Archive API::
+
+    [Credentials]
+    ia_access_key_id = <your access key>
+    ia_secret_access_key = <your secret key>
 
 Boto
 ^^^^
 
-The Boto section is used to specify options that control the operaton of
+The Boto section is used to specify options that control the operation of
 boto itself. This section defines the following options:
 
 :debug: Controls the level of debug messages that will be printed by the
     boto library.
@@ -99,7 +117,7 @@ boto itself. This section defines the following options:
     request. The default number of retries is 5 but you can change the
     default with this option.
 
-As an example::
+For example::
 
     [Boto]
     debug = 0
@@ -110,6 +128,152 @@ As an example::
     proxy_user = foo
     proxy_pass = bar
 
+
+:connection_stale_duration: Amount of time to wait in seconds before a
+    connection will stop getting reused. AWS will disconnect connections which
+    have been idle for 180 seconds.
+:is_secure: Is the connection over SSL. This setting will override passed-in
+    values.
+:https_validate_certificates: Validate HTTPS certificates. This is on by default.
+:ca_certificates_file: Location of CA certificates.
+:http_socket_timeout: Timeout used to override the system default socket
+    timeout for httplib.
+:send_crlf_after_proxy_auth_headers: Change line ending behaviour with proxies.
+    For more details see this `discussion <https://groups.google.com/forum/?fromgroups=#!topic/boto-dev/teenFvOq2Cc>`_
+
+These settings will default to::
+
+    [Boto]
+    connection_stale_duration = 180
+    is_secure = True
+    https_validate_certificates = True
+    ca_certificates_file = cacerts.txt
+    http_socket_timeout = 60
+    send_crlf_after_proxy_auth_headers = False
+
+You can control the timeouts and number of retries used when retrieving
+information from the Metadata Service (this is used for retrieving credentials
+for IAM roles on EC2 instances):
+
+:metadata_service_timeout: Number of seconds until requests to the metadata
+    service will timeout (float).
+:metadata_service_num_attempts: Number of times to attempt to retrieve
+    information from the metadata service before giving up (int).
+
+These settings will default to::
+
+    [Boto]
+    metadata_service_timeout = 1.0
+    metadata_service_num_attempts = 1
+
+
+This section is also used for specifying endpoints for non-AWS services such as
+Eucalyptus and Walrus.
+
+:eucalyptus_host: Select a default endpoint host for Eucalyptus
+:walrus_host: Select a default host for Walrus
+
+For example::
+
+    [Boto]
+    eucalyptus_host = somehost.example.com
+    walrus_host = somehost.example.com
+
+
+Finally, the Boto section is used to set default API versions for many AWS services.
+
+AutoScale settings:
+
+:autoscale_version: Set the API version
+:autoscale_endpoint: Endpoint to use
+:autoscale_region_name: Default region to use
+
+For example::
+
+    [Boto]
+    autoscale_version = 2011-01-01
+    autoscale_endpoint = autoscaling.us-west-2.amazonaws.com
+    autoscale_region_name = us-west-2
+
+
+CloudFormation settings can also be defined:
+
+:cfn_version: CloudFormation API version
+:cfn_region_name: Default region name
+:cfn_region_endpoint: Default endpoint
+
+For example::
+
+    [Boto]
+    cfn_version = 2010-05-15
+    cfn_region_name = us-west-2
+    cfn_region_endpoint = cloudformation.us-west-2.amazonaws.com
+
+CloudSearch settings:
+
+:cs_region_name: Default CloudSearch region
+:cs_region_endpoint: Default CloudSearch endpoint
+
+For example::
+
+    [Boto]
+    cs_region_name = us-west-2
+    cs_region_endpoint = cloudsearch.us-west-2.amazonaws.com
+
+CloudWatch settings:
+
+:cloudwatch_version: CloudWatch API version
+:cloudwatch_region_name: Default region name
+:cloudwatch_region_endpoint: Default endpoint
+
+For example::
+
+    [Boto]
+    cloudwatch_version = 2010-08-01
+    cloudwatch_region_name = us-west-2
+    cloudwatch_region_endpoint = monitoring.us-west-2.amazonaws.com
+
+EC2 settings:
+
+:ec2_version: EC2 API version
+:ec2_region_name: Default region name
+:ec2_region_endpoint: Default endpoint
+
+For example::
+
+    [Boto]
+    ec2_version = 2012-12-01
+    ec2_region_name = us-west-2
+    ec2_region_endpoint = ec2.us-west-2.amazonaws.com
+
+ELB settings:
+
+:elb_version: ELB API version
+:elb_region_name: Default region name
+:elb_region_endpoint: Default endpoint
+
+For example::
+
+    [Boto]
+    elb_version = 2012-06-01
+    elb_region_name = us-west-2
+    elb_region_endpoint = elasticloadbalancing.us-west-2.amazonaws.com
+
+EMR settings:
+
+:emr_version: EMR API version
+:emr_region_name: Default region name
+:emr_region_endpoint: Default endpoint
+
+For example::
+
+    [Boto]
+    emr_version = 2009-03-31
+    emr_region_name = us-west-2
+    emr_region_endpoint = elasticmapreduce.us-west-2.amazonaws.com
+
+
 Precedence
 ----------
@@ -117,9 +281,119 @@ Even if you have your boto config setup, you can also have credentials and
 options stored in environmental variables or you can explicitly pass them to
 method calls i.e.::
 
-    >>> boto.connect_ec2('<KEY_ID>','<SECRET_KEY>')
+    >>> boto.ec2.connect_to_region(
+    ...     'us-west-2',
+    ...     aws_access_key_id='foo',
+    ...     aws_secret_access_key='bar')
 
 In these cases where these options can be found in more than one place boto
 will first use the explicitly supplied arguments, if none found it will then
 look for them amidst environment variables and if that fails it will use the
 ones in boto config.
+
+Notification
+^^^^^^^^^^^^
+
+If you are using notifications for boto.pyami, you can specify the email
+details through the following variables.
+
+:smtp_from: Used as the sender in notification emails.
+:smtp_to: Destination to which emails should be sent.
+:smtp_host: Host to connect to when sending notification emails.
+:smtp_port: Port to connect to when connecting to ``smtp_host``.
+
+Default values are::
+
+    [notification]
+    smtp_from = boto
+    smtp_to = None
+    smtp_host = localhost
+    smtp_port = 25
+    smtp_tls = True
+    smtp_user = john
+    smtp_pass = hunter2
+
+SWF
+^^^
+
+The SWF section allows you to configure the default region to be used for the
+Amazon Simple Workflow service.
+
+:region: Set the default region
+
+Example::
+
+    [SWF]
+    region = us-west-2
+
+Pyami
+^^^^^
+
+The Pyami section is used to configure the working directory for PyAMI.
+
+:working_dir: Working directory used by PyAMI
+
+Example::
+
+    [Pyami]
+    working_dir = /home/foo/
+
+DB
+^^
+The DB section is used to configure access to databases through the
+:func:`boto.sdb.db.manager.get_manager` function.
+
+:db_type: Type of the database. Current allowed values are `SimpleDB` and
+    `XML`.
+:db_user: AWS access key id.
+:db_passwd: AWS secret access key.
+:db_name: Database that will be connected to.
+:db_table: Table name. Note: this doesn't appear to be used.
+:db_host: Host to connect to.
+:db_port: Port to connect to.
+:enable_ssl: Use SSL.
+
+More examples::
+
+    [DB]
+    db_type = SimpleDB
+    db_user = <aws access key id>
+    db_passwd = <aws secret access key>
+    db_name = my_domain
+    db_table = table
+    db_host = sdb.amazonaws.com
+    enable_ssl = True
+    debug = True
+
+    [DB_TestBasic]
+    db_type = SimpleDB
+    db_user = <another aws access key id>
+    db_passwd = <another aws secret access key>
+    db_name = basic_domain
+    db_port = 1111
+
+SDB
+^^^
+
+This section is used to configure SimpleDB.
+
+:region: Set the region to which SDB should connect
+
+Example::
+
+    [SDB]
+    region = us-west-2
+
+DynamoDB
+^^^^^^^^
+
+This section is used to configure DynamoDB.
+
+:region: Choose the default region
+:validate_checksums: Check checksums returned by DynamoDB
+
+Example::
+
+    [DynamoDB]
+    region = us-west-2
+    validate_checksums = True
diff --git a/docs/source/cloudsearch_tut.rst b/docs/source/cloudsearch_tut.rst
index 7172a47d..f29bccad 100644
--- a/docs/source/cloudsearch_tut.rst
+++ b/docs/source/cloudsearch_tut.rst
@@ -16,10 +16,12 @@ The first step in accessing CloudSearch is to create a connection to the service
 The recommended method of doing this is as follows::
 
     >>> import boto.cloudsearch
-    >>> conn = boto.cloudsearch.connect_to_region("us-east-1", aws_access_key_id= '<aws access key'>, aws_secret_access_key='<aws secret key>')
+    >>> conn = boto.cloudsearch.connect_to_region("us-west-2",
+    ...     aws_access_key_id='<aws access key>',
+    ...     aws_secret_access_key='<aws secret key>')
 
 At this point, the variable conn will point to a CloudSearch connection object
-in the us-east-1 region. Currently, this is the only region which has the
+in the us-west-2 region. Currently, this is the only region which has the
 CloudSearch service. In this example, the AWS access key and AWS secret key are
 passed in to the method explicitly. Alternatively, you can set the environment
 variables:
@@ -30,7 +32,7 @@ variables:
 and then simply call::
 
     >>> import boto.cloudsearch
-    >>> conn = boto.cloudsearch.connect_to_region("us-east-1")
+    >>> conn = boto.cloudsearch.connect_to_region("us-west-2")
 
 In either case, conn will point to the Connection object which we will use
 throughout the remainder of this tutorial.
@@ -40,7 +42,7 @@ Creating a Domain
 
 Once you have a connection established with the CloudSearch service, you will
 want to create a domain. A domain encapsulates the data that you wish to index,
-as well as indexes and metadata relating to it.
+as well as indexes and metadata relating to it::
 
 >>> from boto.cloudsearch.domain import Domain
 >>> domain = Domain(conn, conn.create_domain('demo'))
 
@@ -51,8 +53,9 @@ document service, which you will use to index and search.
 Setting access policies
 -----------------------
 
-Before you can connect to a document service, you need to set the correct access properties.
-For example, if you were connecting from 192.168.1.0, you could give yourself access as follows:
+Before you can connect to a document service, you need to set the correct
+access properties. For example, if you were connecting from 192.168.1.0, you
+could give yourself access as follows::
 
 >>> our_ip = '192.168.1.0'
 
@@ -61,50 +64,57 @@ For example, if you were connecting from 192.168.1.0, you could give yourself ac
 >>> policy.allow_search_ip(our_ip)
 >>> policy.allow_doc_ip(our_ip)
 
-You can use the allow_search_ip() and allow_doc_ip() methods to give different
-CIDR blocks access to searching and the document service respectively.
+You can use the :py:meth:`allow_search_ip
+<boto.cloudsearch.optionstatus.ServicePoliciesStatus.allow_search_ip>` and
+:py:meth:`allow_doc_ip <boto.cloudsearch.optionstatus.ServicePoliciesStatus.allow_doc_ip>`
+methods to give different CIDR blocks access to searching and the document
+service respectively.
 
 Creating index fields
 ---------------------
 
 Each domain can have up to twenty index fields which are indexed by the
 CloudSearch service. For each index field, you will need to specify whether
-it's a text or integer field, as well as optionaly a default value.
+it's a text or integer field, as well as optionally a default value::
 
 >>> # Create an 'text' index field called 'username'
 >>> uname_field = domain.create_index_field('username', 'text')
 
 >>> # Epoch time of when the user last did something
- >>> time_field = domain.create_index_field('last_activity', 'uint', default=0)
+ >>> time_field = domain.create_index_field('last_activity',
+ ... 'uint',
+ ... default=0)
 
 It is also possible to mark an index field as a facet. Doing so allows a search
 query to return categories into which results can be grouped, or to create
-drill-down categories
-
- >>> # But it would be neat to drill down into different countries
+drill-down categories::
+
+ >>> # But it would be neat to drill down into different countries
 >>> loc_field = domain.create_index_field('location', 'text', facet=True)
 
 Finally, you can also mark a snippet of text as being able to be returned
-directly in your search query by using the results option.
+directly in your search query by using the results option::
 
 >>> # Directly insert user snippets in our results
 >>> snippet_field = domain.create_index_field('snippet', 'text', result=True)
 
-You can add up to 20 index fields in this manner:
+You can add up to 20 index fields in this manner::
 
- >>> follower_field = domain.create_index_field('follower_count', 'uint', default=0)
+ >>> follower_field = domain.create_index_field('follower_count',
+ ... 'uint',
+ ... default=0)
 
 Adding Documents to the Index
 -----------------------------
 
 Now, we can add some documents to our new search domain. First, you will need a
-document service object through which queries are sent:
+document service object through which queries are sent::
 
 >>> doc_service = domain.get_document_service()
 
 For this example, we will use a pre-populated list of sample content for our
 import. You would normally pull such data from your database or another
-document store.
+document store:: >>> users = [ { @@ -142,27 +152,30 @@ document store. ] When adding documents to our document service, we will batch them together. You -can schedule a document to be added by using the add() method. Whenever you are -adding a document, you must provide a unique ID, a version ID, and the actual -document to be indexed. In this case, we are using the user ID as our unique -ID. The version ID is used to determine which is the latest version of an -object to be indexed. If you wish to update a document, you must use a higher -version ID. In this case, we are using the time of the user's last activity as -a version number. +can schedule a document to be added by using the :py:meth:`add +<boto.cloudsearch.document.DocumentServiceConnection.add>` method. Whenever you are adding a +document, you must provide a unique ID, a version ID, and the actual document +to be indexed. In this case, we are using the user ID as our unique ID. The +version ID is used to determine which is the latest version of an object to be +indexed. If you wish to update a document, you must use a higher version ID. In +this case, we are using the time of the user's last activity as a version +number:: >>> for user in users: >>> doc_service.add(user['id'], user['last_activity'], user) When you are ready to send the batched request to the document service, you can -do with the commit() method. Note that cloudsearch will charge per 1000 batch -uploads. Each batch upload must be under 5MB. +do with the :py:meth:`commit +<boto.cloudsearch.document.DocumentServiceConnection.commit>` method. Note that +cloudsearch will charge per 1000 batch uploads. Each batch upload must be under +5MB:: - >>> result = doc_service.commit() + >>> result = doc_service.commit() -The result is an instance of `cloudsearch.CommitResponse` which will -make the plain dictionary response a nice object (ie result.adds, -result.deletes) and raise an exception for us if all of our documents -weren't actually committed. +The result is an instance of :py:class:`CommitResponse +<boto.cloudsearch.document.CommitResponse>` which will make the plain +dictionary response a nice object (ie result.adds, result.deletes) and raise an +exception for us if all of our documents weren't actually committed. After you have successfully committed some documents to cloudsearch, you must use :py:meth:`clear_sdf @@ -173,12 +186,13 @@ cleared. Searching Documents ------------------- -Now, let's try performing a search. First, we will need a SearchServiceConnection: +Now, let's try performing a search. First, we will need a +SearchServiceConnection:: >>> search_service = domain.get_search_service() A standard search will return documents which contain the exact words being -searched for. +searched for:: >>> results = search_service.search(q="dan") >>> results.hits @@ -186,7 +200,7 @@ searched for. >>> map(lambda x: x['id'], results) [u'1', u'4'] -The standard search does not look at word order: +The standard search does not look at word order:: >>> results = search_service.search(q="dinosaur dress") >>> results.hits @@ -196,7 +210,7 @@ The standard search does not look at word order: It's also possible to do more complex queries using the bq argument (Boolean Query). When you are using bq, your search terms must be enclosed in single -quotes. +quotes:: >>> results = search_service.search(bq="'dan'") >>> results.hits @@ -205,7 +219,7 @@ quotes. 
[u'1', u'4'] When you are using boolean queries, it's also possible to use wildcards to -extend your search to all words which start with your search terms: +extend your search to all words which start with your search terms:: >>> results = search_service.search(bq="'dan*'") >>> results.hits @@ -215,7 +229,7 @@ extend your search to all words which start with your search terms: The boolean query also allows you to create more complex queries. You can OR term together using "|", AND terms together using "+" or a space, and you can -remove words from the query using the "-" operator. +remove words from the query using the "-" operator:: >>> results = search_service.search(bq="'watched|moved'") >>> results.hits @@ -224,7 +238,7 @@ remove words from the query using the "-" operator. [u'3', u'4'] By default, the search will return 10 terms but it is possible to adjust this -by using the size argument as follows: +by using the size argument as follows:: >>> results = search_service.search(bq="'dan*'", size=2) >>> results.hits @@ -232,7 +246,8 @@ by using the size argument as follows: >>> map(lambda x: x['id'], results) [u'1', u'2'] -It is also possible to offset the start of the search by using the start argument as follows: +It is also possible to offset the start of the search by using the start +argument as follows:: >>> results = search_service.search(bq="'dan*'", start=2) >>> results.hits @@ -244,18 +259,20 @@ It is also possible to offset the start of the search by using the start argumen Ordering search results and rank expressions -------------------------------------------- -If your search query is going to return many results, it is good to be able to sort them -You can order your search results by using the rank argument. You are able to -sort on any fields which have the results option turned on. +If your search query is going to return many results, it is good to be able to +sort them. You can order your search results by using the rank argument. You are +able to sort on any fields which have the results option turned on:: >>> results = search_service.search(bq=query, rank=['-follower_count']) You can also create your own rank expressions to sort your results according to -other criteria: +other criteria, such as showing most recently active user, or combining the +recency score with the text_relevance:: + + >>> domain.create_rank_expression('recently_active', 'last_activity') - >>> domain.create_rank_expression('recently_active', 'last_activity') # We'll want to be able to just show the most recently active users - - >>> domain.create_rank_expression('activish', 'text_relevance + ((follower_count/(time() - last_activity))*1000)') # Let's get trickier and combine text relevance with a really dynamic expression + >>> domain.create_rank_expression('activish', + ... 'text_relevance + ((follower_count/(time() - last_activity))*1000)') >>> results = search_service.search(bq=query, rank=['-recently_active']) @@ -273,7 +290,7 @@ you map the term running to the stem run and then search for running, the request matches documents that contain run as well as running. To get the current stemming dictionary defined for a domain, use the -``get_stemming`` method of the Domain object. 
+:py:meth:`get_stemming <boto.cloudsearch.domain.Domain.get_stemming>` method:: >>> stems = domain.get_stemming() >>> stems @@ -282,7 +299,7 @@ To get the current stemming dictionary defined for a domain, use the This returns a dictionary object that can be manipulated directly to add additional stems for your search domain by adding pairs of term:stem -to the stems dictionary. +to the stems dictionary:: >>> stems['stems']['running'] = 'run' >>> stems['stems']['ran'] = 'run' @@ -291,12 +308,12 @@ to the stems dictionary. >>> This has changed the value locally. To update the information in -Amazon CloudSearch, you need to save the data. +Amazon CloudSearch, you need to save the data:: >>> stems.save() You can also access certain CloudSearch-specific attributes related to -the stemming dictionary defined for your domain. +the stemming dictionary defined for your domain:: >>> stems.status u'RequiresIndexDocuments' @@ -321,7 +338,7 @@ so common that including them would result in a massive number of matches. To view the stopwords currently defined for your domain, use the -``get_stopwords`` method of the Domain object. +:py:meth:`get_stopwords <boto.cloudsearch.domain.Domain.get_stopwords>` method:: >>> stopwords = domain.get_stopwords() >>> stopwords @@ -344,17 +361,18 @@ To view the stopwords currently defined for your domain, use the u'the', u'to', u'was']} - >>> + >>> You can add additional stopwords by simply appending the values to the -list. +list:: >>> stopwords['stopwords'].append('foo') >>> stopwords['stopwords'].append('bar') >>> stopwords Similarly, you could remove currently defined stopwords from the list. -To save the changes, use the ``save`` method. +To save the changes, use the :py:meth:`save +<boto.cloudsearch.optionstatus.OptionStatus.save>` method:: >>> stopwords.save() @@ -371,13 +389,13 @@ the indexed term, the results will include documents that contain the indexed term. If you want two terms to match the same documents, you must define -them as synonyms of each other. For example: +them as synonyms of each other. For example:: cat, feline feline, cat To view the synonyms currently defined for your domain, use the -``get_synonyms`` method of the Domain object. +:py:meth:`get_synonyms <boto.cloudsearch.domain.Domain.get_synonyms>` method:: >>> synonyms = domain.get_synonyms() >>> synonyms @@ -385,12 +403,13 @@ To view the synonyms currently defined for your domain, use the >>> You can define new synonyms by adding new term:synonyms entries to the -synonyms dictionary object. +synonyms dictionary object:: >>> synonyms['synonyms']['cat'] = ['feline', 'kitten'] >>> synonyms['synonyms']['dog'] = ['canine', 'puppy'] -To save the changes, use the ``save`` method. +To save the changes, use the :py:meth:`save +<boto.cloudsearch.optionstatus.OptionStatus.save>` method:: >>> synonyms.save() @@ -400,12 +419,14 @@ that provide additional information about the stopwords in your domain. 
Deleting Documents ------------------ +It is also possible to delete documents:: + >>> import time >>> from datetime import datetime >>> doc_service = domain.get_document_service() >>> # Again we'll cheat and use the current epoch time as our version number - + >>> doc_service.delete(4, int(time.mktime(datetime.utcnow().timetuple()))) >>> service.commit() diff --git a/docs/source/cloudwatch_tut.rst b/docs/source/cloudwatch_tut.rst index 5639c043..c9302092 100644 --- a/docs/source/cloudwatch_tut.rst +++ b/docs/source/cloudwatch_tut.rst @@ -12,8 +12,8 @@ EC2Connection object or call the monitor method on the Instance object. It takes a while for the monitoring data to start accumulating but once
it does, you can do this::
- >>> import boto
- >>> c = boto.connect_cloudwatch()
+ >>> import boto.ec2.cloudwatch
+ >>> c = boto.ec2.cloudwatch.connect_to_region('us-west-2')
>>> metrics = c.list_metrics()
>>> metrics
[Metric:NetworkIn,
@@ -113,4 +113,4 @@ about that particular data point.::
 u'Timestamp': u'2009-05-21T19:55:00Z',
u'Unit': u'Percent'}
-My server obviously isn't very busy right now!
\ No newline at end of file
+My server obviously isn't very busy right now!
diff --git a/docs/source/dynamodb_tut.rst b/docs/source/dynamodb_tut.rst
index 07f06083..0e6a81a1 100644
--- a/docs/source/dynamodb_tut.rst
+++ b/docs/source/dynamodb_tut.rst
@@ -1,339 +1,340 @@
-.. dynamodb_tut:
-
-============================================
-An Introduction to boto's DynamoDB interface
-============================================
-
-This tutorial focuses on the boto interface to AWS' DynamoDB_. This tutorial
-assumes that you have boto already downloaded and installed.
-
-.. _DynamoDB: http://aws.amazon.com/dynamodb/
-
-
-Creating a Connection
----------------------
-
-The first step in accessing DynamoDB is to create a connection to the service.
-To do so, the most straight forward way is the following::
-
- >>> import boto
- >>> conn = boto.connect_dynamodb(
- aws_access_key_id='<YOUR_AWS_KEY_ID>',
- aws_secret_access_key='<YOUR_AWS_SECRET_KEY>')
- >>> conn
- <boto.dynamodb.layer2.Layer2 object at 0x3fb3090>
-
-Bear in mind that if you have your credentials in boto config in your home
-directory, the two keyword arguments in the call above are not needed. More
-details on configuration can be found in :doc:`boto_config_tut`.
-
-The :py:func:`boto.connect_dynamodb` functions returns a
-:py:class:`boto.dynamodb.layer2.Layer2` instance, which is a high-level API
-for working with DynamoDB. Layer2 is a set of abstractions that sit atop
-the lower level :py:class:`boto.dynamodb.layer1.Layer1` API, which closely
-mirrors the Amazon DynamoDB API. For the purpose of this tutorial, we'll
-just be covering Layer2.
-
-
-Listing Tables
---------------
-
-Now that we have a DynamoDB connection object, we can then query for a list of
-existing tables in that region::
-
- >>> conn.list_tables()
- ['test-table', 'another-table']
-
-
-Creating Tables
----------------
-
-DynamoDB tables are created with the
-:py:meth:`Layer2.create_table <boto.dynamodb.layer2.Layer2.create_table>`
-method. While DynamoDB's items (a rough equivalent to a relational DB's row)
-don't have a fixed schema, you do need to create a schema for the table's
-hash key element, and the optional range key element. This is explained in
-greater detail in DynamoDB's `Data Model`_ documentation.
-
-We'll start by defining a schema that has a hash key and a range key that
-are both keys::
-
- >>> message_table_schema = conn.create_schema(
- hash_key_name='forum_name',
- hash_key_proto_value=str,
- range_key_name='subject',
- range_key_proto_value=str
- )
-
-The next few things to determine are table name and read/write throughput. We'll
-defer explaining throughput to the DynamoDB's `Provisioned Throughput`_ docs.
-
-We're now ready to create the table::
-
- >>> table = conn.create_table(
- name='messages',
- schema=message_table_schema,
- read_units=10,
- write_units=10
- )
- >>> table
- Table(messages)
-
-This returns a :py:class:`boto.dynamodb.table.Table` instance, which provides
-simple ways to create (put), update, and delete items.
-
-
-Getting a Table
----------------
-
-To retrieve an existing table, use
-:py:meth:`Layer2.get_table <boto.dynamodb.layer2.Layer2.get_table>`::
-
- >>> conn.list_tables()
- ['test-table', 'another-table', 'messages']
- >>> table = conn.get_table('messages')
- >>> table
- Table(messages)
-
-:py:meth:`Layer2.get_table <boto.dynamodb.layer2.Layer2.get_table>`, like
-:py:meth:`Layer2.create_table <boto.dynamodb.layer2.Layer2.create_table>`,
-returns a :py:class:`boto.dynamodb.table.Table` instance.
-
-Keep in mind that :py:meth:`Layer2.get_table <boto.dynamodb.layer2.Layer2.get_table>`
-will make an API call to retrieve various attributes of the table including the
-creation time, the read and write capacity, and the table schema. If you
-already know the schema, you can save an API call and create a
-:py:class:`boto.dynamodb.table.Table` object without making any calls to
-Amazon DynamoDB::
-
- >>> table = conn.table_from_schema(
- name='messages',
- schema=message_table_schema)
-
-If you do this, the following fields will have ``None`` values:
-
- * create_time
- * status
- * read_units
- * write_units
-
-In addition, the ``item_count`` and ``size_bytes`` will be 0.
-If you create a table object directly from a schema object and
-decide later that you need to retrieve any of these additional
-attributes, you can use the
-:py:meth:`Table.refresh <boto.dynamodb.table.Table.refresh>` method::
-
- >>> from boto.dynamodb.schema import Schema
- >>> table = conn.table_from_schema(
- name='messages',
- schema=Schema.create(hash_key=('forum_name', 'S'),
- range_key=('subject', 'S')))
- >>> print table.write_units
- None
- >>> # Now we decide we need to know the write_units:
- >>> table.refresh()
- >>> print table.write_units
- 10
-
-
-The recommended best practice is to retrieve a table object once and
-use that object for the duration of your application. So, for example,
-instead of this::
-
- class Application(object):
- def __init__(self, layer2):
- self._layer2 = layer2
-
- def retrieve_item(self, table_name, key):
- return self._layer2.get_table(table_name).get_item(key)
-
-You can do something like this instead::
-
- class Application(object):
- def __init__(self, layer2):
- self._layer2 = layer2
- self._tables_by_name = {}
-
- def retrieve_item(self, table_name, key):
- table = self._tables_by_name.get(table_name)
- if table is None:
- table = self._layer2.get_table(table_name)
- self._tables_by_name[table_name] = table
- return table.get_item(key)
-
-
-Describing Tables
------------------
-
-To get a complete description of a table, use
-:py:meth:`Layer2.describe_table <boto.dynamodb.layer2.Layer2.describe_table>`::
-
- >>> conn.list_tables()
- ['test-table', 'another-table', 'messages']
- >>> conn.describe_table('messages')
- {
- 'Table': {
- 'CreationDateTime': 1327117581.624,
- 'ItemCount': 0,
- 'KeySchema': {
- 'HashKeyElement': {
- 'AttributeName': 'forum_name',
- 'AttributeType': 'S'
- },
- 'RangeKeyElement': {
- 'AttributeName': 'subject',
- 'AttributeType': 'S'
- }
- },
- 'ProvisionedThroughput': {
- 'ReadCapacityUnits': 10,
- 'WriteCapacityUnits': 10
- },
- 'TableName': 'messages',
- 'TableSizeBytes': 0,
- 'TableStatus': 'ACTIVE'
- }
- }
-
-
-Adding Items
-------------
-
-Continuing on with our previously created ``messages`` table, adding an::
-
- >>> table = conn.get_table('messages')
- >>> item_data = {
- 'Body': 'http://url_to_lolcat.gif',
- 'SentBy': 'User A',
- 'ReceivedTime': '12/9/2011 11:36:03 PM',
- }
- >>> item = table.new_item(
- # Our hash key is 'forum'
- hash_key='LOLCat Forum',
- # Our range key is 'subject'
- range_key='Check this out!',
- # This has the
- attrs=item_data
- )
-
-The
-:py:meth:`Table.new_item <boto.dynamodb.table.Table.new_item>` method creates
-a new :py:class:`boto.dynamodb.item.Item` instance with your specified
-hash key, range key, and attributes already set.
-:py:class:`Item <boto.dynamodb.item.Item>` is a :py:class:`dict` sub-class,
-meaning you can edit your data as such::
-
- item['a_new_key'] = 'testing'
- del item['a_new_key']
-
-After you are happy with the contents of the item, use
-:py:meth:`Item.put <boto.dynamodb.item.Item.put>` to commit it to DynamoDB::
-
- >>> item.put()
-
-
-Retrieving Items
-----------------
-
-Now, let's check if it got added correctly. Since DynamoDB works under an
-'eventual consistency' mode, we need to specify that we wish a consistent read,
-as follows::
-
- >>> table = conn.get_table('messages')
- >>> item = table.get_item(
- # Your hash key was 'forum_name'
- hash_key='LOLCat Forum',
- # Your range key was 'subject'
- range_key='Check this out!'
- )
- >>> item
- {
- # Note that this was your hash key attribute (forum_name)
- 'forum_name': 'LOLCat Forum',
- # This is your range key attribute (subject)
- 'subject': 'Check this out!'
- 'Body': 'http://url_to_lolcat.gif',
- 'ReceivedTime': '12/9/2011 11:36:03 PM',
- 'SentBy': 'User A',
- }
-
-
-Updating Items
---------------
-
-To update an item's attributes, simply retrieve it, modify the value, then
-:py:meth:`Item.put <boto.dynamodb.item.Item.put>` it again::
-
- >>> table = conn.get_table('messages')
- >>> item = table.get_item(
- hash_key='LOLCat Forum',
- range_key='Check this out!'
- )
- >>> item['SentBy'] = 'User B'
- >>> item.put()
-
-Working with Decimals
----------------------
-
-To avoid the loss of precision, you can stipulate that the
-``decimal.Decimal`` type be used for numeric values::
-
- >>> import decimal
- >>> conn.use_decimals()
- >>> table = conn.get_table('messages')
- >>> item = table.new_item(
- hash_key='LOLCat Forum',
- range_key='Check this out!'
- )
- >>> item['decimal_type'] = decimal.Decimal('1.12345678912345')
- >>> item.put()
- >>> print table.get_item('LOLCat Forum', 'Check this out!')
- {u'forum_name': 'LOLCat Forum', u'decimal_type': Decimal('1.12345678912345'),
- u'subject': 'Check this out!'}
-
-You can enable the usage of ``decimal.Decimal`` by using either the ``use_decimals``
-method, or by passing in the
-:py:class:`Dynamizer <boto.dynamodb.types.Dynamizer>` class for
-the ``dynamizer`` param::
-
- >>> from boto.dynamodb.types import Dynamizer
- >>> conn = boto.connect_dynamodb(dynamizer=Dynamizer)
-
-This mechanism can also be used if you want to customize the encoding/decoding
-process of DynamoDB types.
- -
-Deleting Items
---------------
-
-To delete items, use the
-:py:meth:`Item.delete <boto.dynamodb.item.Item.delete>` method::
-
- >>> table = conn.get_table('messages')
- >>> item = table.get_item(
- hash_key='LOLCat Forum',
- range_key='Check this out!'
- )
- >>> item.delete()
-
-
-Deleting Tables
----------------
-
-.. WARNING::
- Deleting a table will also **permanently** delete all of its contents without prompt. Use carefully.
-
-There are two easy ways to delete a table. Through your top-level
-:py:class:`Layer2 <boto.dynamodb.layer2.Layer2>` object::
-
- >>> conn.delete_table(table)
-
-Or by getting the table, then using
-:py:meth:`Table.delete <boto.dynamodb.table.Table.delete>`::
-
- >>> table = conn.get_table('messages')
- >>> table.delete()
-
-
-.. _Data Model: http://docs.amazonwebservices.com/amazondynamodb/latest/developerguide/DataModel.html
-.. _Provisioned Throughput: http://docs.amazonwebservices.com/amazondynamodb/latest/developerguide/ProvisionedThroughputIntro.html
+.. dynamodb_tut: + +============================================ +An Introduction to boto's DynamoDB interface +============================================ + +This tutorial focuses on the boto interface to AWS' DynamoDB_. This tutorial +assumes that you have boto already downloaded and installed. + +.. _DynamoDB: http://aws.amazon.com/dynamodb/ + + +Creating a Connection +--------------------- + +The first step in accessing DynamoDB is to create a connection to the service. +To do so, the most straight forward way is the following:: + + >>> import boto.dynamodb + >>> conn = boto.dynamodb.connect_to_region( + 'us-west-2', + aws_access_key_id='<YOUR_AWS_KEY_ID>', + aws_secret_access_key='<YOUR_AWS_SECRET_KEY>') + >>> conn + <boto.dynamodb.layer2.Layer2 object at 0x3fb3090> + +Bear in mind that if you have your credentials in boto config in your home +directory, the two keyword arguments in the call above are not needed. More +details on configuration can be found in :doc:`boto_config_tut`. + +The :py:func:`boto.dynamodb.connect_to_region` function returns a +:py:class:`boto.dynamodb.layer2.Layer2` instance, which is a high-level API +for working with DynamoDB. Layer2 is a set of abstractions that sit atop +the lower level :py:class:`boto.dynamodb.layer1.Layer1` API, which closely +mirrors the Amazon DynamoDB API. For the purpose of this tutorial, we'll +just be covering Layer2. + + +Listing Tables +-------------- + +Now that we have a DynamoDB connection object, we can then query for a list of +existing tables in that region:: + + >>> conn.list_tables() + ['test-table', 'another-table'] + + +Creating Tables +--------------- + +DynamoDB tables are created with the +:py:meth:`Layer2.create_table <boto.dynamodb.layer2.Layer2.create_table>` +method. While DynamoDB's items (a rough equivalent to a relational DB's row) +don't have a fixed schema, you do need to create a schema for the table's +hash key element, and the optional range key element. This is explained in +greater detail in DynamoDB's `Data Model`_ documentation. + +We'll start by defining a schema that has a hash key and a range key that +are both strings:: + + >>> message_table_schema = conn.create_schema( + hash_key_name='forum_name', + hash_key_proto_value=str, + range_key_name='subject', + range_key_proto_value=str + ) + +The next few things to determine are table name and read/write throughput. We'll +defer explaining throughput to the DynamoDB's `Provisioned Throughput`_ docs. + +We're now ready to create the table:: + + >>> table = conn.create_table( + name='messages', + schema=message_table_schema, + read_units=10, + write_units=10 + ) + >>> table + Table(messages) + +This returns a :py:class:`boto.dynamodb.table.Table` instance, which provides +simple ways to create (put), update, and delete items. + + +Getting a Table +--------------- + +To retrieve an existing table, use +:py:meth:`Layer2.get_table <boto.dynamodb.layer2.Layer2.get_table>`:: + + >>> conn.list_tables() + ['test-table', 'another-table', 'messages'] + >>> table = conn.get_table('messages') + >>> table + Table(messages) + +:py:meth:`Layer2.get_table <boto.dynamodb.layer2.Layer2.get_table>`, like +:py:meth:`Layer2.create_table <boto.dynamodb.layer2.Layer2.create_table>`, +returns a :py:class:`boto.dynamodb.table.Table` instance. + +Keep in mind that :py:meth:`Layer2.get_table <boto.dynamodb.layer2.Layer2.get_table>` +will make an API call to retrieve various attributes of the table including the +creation time, the read and write capacity, and the table schema. 
If you +already know the schema, you can save an API call and create a +:py:class:`boto.dynamodb.table.Table` object without making any calls to +Amazon DynamoDB:: + + >>> table = conn.table_from_schema( + name='messages', + schema=message_table_schema) + +If you do this, the following fields will have ``None`` values: + + * create_time + * status + * read_units + * write_units + +In addition, the ``item_count`` and ``size_bytes`` will be 0. +If you create a table object directly from a schema object and +decide later that you need to retrieve any of these additional +attributes, you can use the +:py:meth:`Table.refresh <boto.dynamodb.table.Table.refresh>` method:: + + >>> from boto.dynamodb.schema import Schema + >>> table = conn.table_from_schema( + name='messages', + schema=Schema.create(hash_key=('forum_name', 'S'), + range_key=('subject', 'S'))) + >>> print table.write_units + None + >>> # Now we decide we need to know the write_units: + >>> table.refresh() + >>> print table.write_units + 10 + + +The recommended best practice is to retrieve a table object once and +use that object for the duration of your application. So, for example, +instead of this:: + + class Application(object): + def __init__(self, layer2): + self._layer2 = layer2 + + def retrieve_item(self, table_name, key): + return self._layer2.get_table(table_name).get_item(key) + +You can do something like this instead:: + + class Application(object): + def __init__(self, layer2): + self._layer2 = layer2 + self._tables_by_name = {} + + def retrieve_item(self, table_name, key): + table = self._tables_by_name.get(table_name) + if table is None: + table = self._layer2.get_table(table_name) + self._tables_by_name[table_name] = table + return table.get_item(key) + + +Describing Tables +----------------- + +To get a complete description of a table, use +:py:meth:`Layer2.describe_table <boto.dynamodb.layer2.Layer2.describe_table>`:: + + >>> conn.list_tables() + ['test-table', 'another-table', 'messages'] + >>> conn.describe_table('messages') + { + 'Table': { + 'CreationDateTime': 1327117581.624, + 'ItemCount': 0, + 'KeySchema': { + 'HashKeyElement': { + 'AttributeName': 'forum_name', + 'AttributeType': 'S' + }, + 'RangeKeyElement': { + 'AttributeName': 'subject', + 'AttributeType': 'S' + } + }, + 'ProvisionedThroughput': { + 'ReadCapacityUnits': 10, + 'WriteCapacityUnits': 10 + }, + 'TableName': 'messages', + 'TableSizeBytes': 0, + 'TableStatus': 'ACTIVE' + } + } + + +Adding Items +------------ + +Continuing on with our previously created ``messages`` table, adding an:: + + >>> table = conn.get_table('messages') + >>> item_data = { + 'Body': 'http://url_to_lolcat.gif', + 'SentBy': 'User A', + 'ReceivedTime': '12/9/2011 11:36:03 PM', + } + >>> item = table.new_item( + # Our hash key is 'forum' + hash_key='LOLCat Forum', + # Our range key is 'subject' + range_key='Check this out!', + # This has the + attrs=item_data + ) + +The +:py:meth:`Table.new_item <boto.dynamodb.table.Table.new_item>` method creates +a new :py:class:`boto.dynamodb.item.Item` instance with your specified +hash key, range key, and attributes already set. 
+:py:class:`Item <boto.dynamodb.item.Item>` is a :py:class:`dict` sub-class, +meaning you can edit your data as such:: + + item['a_new_key'] = 'testing' + del item['a_new_key'] + +After you are happy with the contents of the item, use +:py:meth:`Item.put <boto.dynamodb.item.Item.put>` to commit it to DynamoDB:: + + >>> item.put() + + +Retrieving Items +---------------- + +Now, let's check if it got added correctly. Since DynamoDB works under an +'eventual consistency' mode, we need to specify that we wish a consistent read, +as follows:: + + >>> table = conn.get_table('messages') + >>> item = table.get_item( + # Your hash key was 'forum_name' + hash_key='LOLCat Forum', + # Your range key was 'subject' + range_key='Check this out!' + ) + >>> item + { + # Note that this was your hash key attribute (forum_name) + 'forum_name': 'LOLCat Forum', + # This is your range key attribute (subject) + 'subject': 'Check this out!' + 'Body': 'http://url_to_lolcat.gif', + 'ReceivedTime': '12/9/2011 11:36:03 PM', + 'SentBy': 'User A', + } + + +Updating Items +-------------- + +To update an item's attributes, simply retrieve it, modify the value, then +:py:meth:`Item.put <boto.dynamodb.item.Item.put>` it again:: + + >>> table = conn.get_table('messages') + >>> item = table.get_item( + hash_key='LOLCat Forum', + range_key='Check this out!' + ) + >>> item['SentBy'] = 'User B' + >>> item.put() + +Working with Decimals +--------------------- + +To avoid the loss of precision, you can stipulate that the +``decimal.Decimal`` type be used for numeric values:: + + >>> import decimal + >>> conn.use_decimals() + >>> table = conn.get_table('messages') + >>> item = table.new_item( + hash_key='LOLCat Forum', + range_key='Check this out!' + ) + >>> item['decimal_type'] = decimal.Decimal('1.12345678912345') + >>> item.put() + >>> print table.get_item('LOLCat Forum', 'Check this out!') + {u'forum_name': 'LOLCat Forum', u'decimal_type': Decimal('1.12345678912345'), + u'subject': 'Check this out!'} + +You can enable the usage of ``decimal.Decimal`` by using either the ``use_decimals`` +method, or by passing in the +:py:class:`Dynamizer <boto.dynamodb.types.Dynamizer>` class for +the ``dynamizer`` param:: + + >>> from boto.dynamodb.types import Dynamizer + >>> conn = boto.dynamodb.connect_to_region(dynamizer=Dynamizer) + +This mechanism can also be used if you want to customize the encoding/decoding +process of DynamoDB types. + + +Deleting Items +-------------- + +To delete items, use the +:py:meth:`Item.delete <boto.dynamodb.item.Item.delete>` method:: + + >>> table = conn.get_table('messages') + >>> item = table.get_item( + hash_key='LOLCat Forum', + range_key='Check this out!' + ) + >>> item.delete() + + +Deleting Tables +--------------- + +.. WARNING:: + Deleting a table will also **permanently** delete all of its contents without prompt. Use carefully. + +There are two easy ways to delete a table. Through your top-level +:py:class:`Layer2 <boto.dynamodb.layer2.Layer2>` object:: + + >>> conn.delete_table(table) + +Or by getting the table, then using +:py:meth:`Table.delete <boto.dynamodb.table.Table.delete>`:: + + >>> table = conn.get_table('messages') + >>> table.delete() + + +.. _Data Model: http://docs.amazonwebservices.com/amazondynamodb/latest/developerguide/DataModel.html +.. 
_Provisioned Throughput: http://docs.amazonwebservices.com/amazondynamodb/latest/developerguide/ProvisionedThroughputIntro.html diff --git a/docs/source/ec2_tut.rst b/docs/source/ec2_tut.rst index f8614dbe..d9ffe38c 100644 --- a/docs/source/ec2_tut.rst +++ b/docs/source/ec2_tut.rst @@ -12,23 +12,19 @@ Creating a Connection --------------------- The first step in accessing EC2 is to create a connection to the service. -There are two ways to do this in boto. The first is:: +The recommended way of doing this in boto is:: - >>> from boto.ec2.connection import EC2Connection - >>> conn = EC2Connection('<AWS_ACCESS_KEY_ID>', '<AWS_SECRET_ACCESS_KEY>') + >>> import boto.ec2 + >>> conn = boto.ec2.connect_to_region("us-west-2", + ... aws_access_key_id='<aws access key>', + ... aws_secret_access_key='<aws secret key>') -At this point the variable conn will point to an EC2Connection object. In -this example, the AWS access key and AWS secret key are passed in to the -method explicitely. Alternatively, you can set the boto config environment variables -and then call the constructor without any arguments, like this:: +At this point the variable ``conn`` will point to an EC2Connection object. In +this example, the AWS access key and AWS secret key are passed in to the method +explicitly. Alternatively, you can set the boto config environment variables +and then simply specify which region you want as follows:: - >>> conn = EC2Connection() - -There is also a shortcut function in the boto package, called connect_ec2 -that may provide a slightly easier means of creating a connection:: - - >>> import boto - >>> conn = boto.connect_ec2() + >>> conn = boto.ec2.connect_to_region("us-west-2") In either case, conn will point to an EC2Connection object which we will use throughout the remainder of this tutorial. @@ -41,7 +37,7 @@ stop and terminate instances. In its most primitive form, you can launch an instance as follows:: >>> conn.run_instances('<ami-image-id>') - + This will launch an instance in the specified region with the default parameters. You will not be able to SSH into this machine, as it doesn't have a security group set. See :doc:`security_groups` for details on creating one. @@ -88,3 +84,95 @@ you can request instance termination. To do so you can use the call bellow:: Please use with care since once you request termination for an instance there is no turning back. +Checking What Instances Are Running +----------------------------------- +You can also get information on your currently running instances:: + + >>> reservations = conn.get_all_instances() + >>> reservations + [Reservation:r-00000000] + +A reservation corresponds to a command to start instances. You can see what +instances are associated with a reservation:: + + >>> instances = reservations[0].instances + >>> instances + [Instance:i-00000000] + +An instance object allows you get more meta-data available about the instance:: + + >>> inst = instances[0] + >>> inst.instance_type + u'c1.xlarge' + >>> inst.placement + u'us-west-2' + +In this case, we can see that our instance is a c1.xlarge instance in the +`us-west-2` availability zone. + +================================= +Using Elastic Block Storage (EBS) +================================= + + +EBS Basics +---------- + +EBS can be used by EC2 instances for permanent storage. Note that EBS volumes +must be in the same availability zone as the EC2 instance you wish to attach it +to. + +To actually create a volume you will need to specify a few details. 
The +following example will create a 50GB EBS in one of the `us-west-2` availability +zones:: + + >>> vol = conn.create_volume(50, "us-west-2") + >>> vol + Volume:vol-00000000 + +You can check that the volume is now ready and available:: + + >>> curr_vol = conn.get_all_volumes([vol.id])[0] + >>> curr_vol.status + u'available' + >>> curr_vol.zone + u'us-west-2' + +We can now attach this volume to the EC2 instance we created earlier, making it +available as a new device:: + + >>> conn.attach_volume (vol.id, inst.id, "/dev/sdx") + u'attaching' + +You will now have a new volume attached to your instance. Note that with some +Linux kernels, `/dev/sdx` may get translated to `/dev/xvdx`. This device can +now be used as a normal block device within Linux. + +Working With Snapshots +---------------------- + +Snapshots allow you to make point-in-time snapshots of an EBS volume for future +recovery. Snapshots allow you to create incremental backups, and can also be +used to instantiate multiple new volumes. Snapshots can also be used to move +EBS volumes across availability zones or making backups to S3. + +Creating a snapshot is easy:: + + >>> snapshot = conn.create_snapshot(vol.id, 'My snapshot') + >>> snapshot + Snapshot:snap-00000000 + +Once you have a snapshot, you can create a new volume from it. Volumes are +created lazily from snapshots, which means you can start using such a volume +straight away:: + + >>> new_vol = snapshot.create_volume('us-west-2') + >>> conn.attach_volume (new_vol.id, inst.id, "/dev/sdy") + u'attaching' + +If you no longer need a snapshot, you can also easily delete it:: + + >>> conn.delete_snapshot(snapshot.id) + True + + diff --git a/docs/source/elb_tut.rst b/docs/source/elb_tut.rst index 10d3ca29..4d5661c4 100644 --- a/docs/source/elb_tut.rst +++ b/docs/source/elb_tut.rst @@ -43,48 +43,27 @@ Creating a Connection The first step in accessing ELB is to create a connection to the service. ->>> import boto ->>> conn = boto.connect_elb( - aws_access_key_id='YOUR-KEY-ID-HERE', - aws_secret_access_key='YOUR-SECRET-HERE' - ) - - -A Note About Regions and Endpoints -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Like EC2, the ELB service has a different endpoint for each region. By default -the US East endpoint is used. To choose a specific region, instantiate the -ELBConnection object with that region's information. - ->>> from boto.regioninfo import RegionInfo ->>> reg = RegionInfo( - name='eu-west-1', - endpoint='elasticloadbalancing.eu-west-1.amazonaws.com' - ) ->>> conn = boto.connect_elb( - aws_access_key_id='YOUR-KEY-ID-HERE', - aws_secret_access_key='YOUR-SECRET-HERE', - region=reg - ) - -Another way to connect to an alternative region is like this: +the US East endpoint is used. 
To choose a specific region, use the +``connect_to_region`` function:: ->>> import boto.ec2.elb ->>> elb = boto.ec2.elb.connect_to_region('eu-west-1') + >>> import boto.ec2.elb + >>> elb = boto.ec2.elb.connect_to_region('us-west-2') Here's yet another way to discover what regions are available and then -connect to one: - ->>> import boto.ec2.elb ->>> regions = boto.ec2.elb.regions() ->>> regions -[RegionInfo:us-east-1, - RegionInfo:ap-northeast-1, - RegionInfo:us-west-1, - RegionInfo:ap-southeast-1, - RegionInfo:eu-west-1] ->>> elb = regions[-1].connect() +connect to one:: + + >>> import boto.ec2.elb + >>> regions = boto.ec2.elb.regions() + >>> regions + [RegionInfo:us-east-1, + RegionInfo:ap-northeast-1, + RegionInfo:us-west-1, + RegionInfo:us-west-2, + RegionInfo:ap-southeast-1, + RegionInfo:eu-west-1] + >>> elb = regions[-1].connect() Alternatively, edit your boto.cfg with the default ELB endpoint to use:: @@ -194,9 +173,9 @@ Finally, let's create a load balancer in the US region that listens on ports and TCP. We want the load balancer to span the availability zones *us-east-1a* and *us-east-1b*: ->>> regions = ['us-east-1a', 'us-east-1b'] +>>> zones = ['us-east-1a', 'us-east-1b'] >>> ports = [(80, 8080, 'http'), (443, 8443, 'tcp')] ->>> lb = conn.create_load_balancer('my-lb', regions, ports) +>>> lb = conn.create_load_balancer('my-lb', zones, ports) >>> # This is from the previous section. >>> lb.configure_health_check(hc) diff --git a/docs/source/emr_tut.rst b/docs/source/emr_tut.rst index 996781ee..c42d188f 100644 --- a/docs/source/emr_tut.rst +++ b/docs/source/emr_tut.rst @@ -27,18 +27,18 @@ and then call the constructor without any arguments, like this: >>> conn = EmrConnection() -There is also a shortcut function in the boto package called connect_emr -that may provide a slightly easier means of creating a connection: +There is also a shortcut function in boto +that makes it easy to create EMR connections: ->>> import boto ->>> conn = boto.connect_emr() +>>> import boto.emr +>>> conn = boto.emr.connect_to_region('us-west-2') In either case, conn points to an EmrConnection object which we will use throughout the remainder of this tutorial. Creating Streaming JobFlow Steps -------------------------------- -Upon creating a connection to Elastic Mapreduce you will next +Upon creating a connection to Elastic Mapreduce you will next want to create one or more jobflow steps. There are two types of steps, streaming and custom jar, both of which have a class in the boto Elastic Mapreduce implementation. @@ -76,8 +76,8 @@ Creating JobFlows ----------------- Once you have created one or more jobflow steps, you will next want to create and run a jobflow. Creating a jobflow that executes either of the steps we created above can be accomplished by: ->>> import boto ->>> conn = boto.connect_emr() +>>> import boto.emr +>>> conn = boto.emr.connect_to_region('us-west-2') >>> jobid = conn.run_jobflow(name='My jobflow', ... log_uri='s3://<my log uri>/jobflow_logs', ... steps=[step]) @@ -102,7 +102,6 @@ Terminating JobFlows -------------------- By default when all the steps of a jobflow have finished or failed the jobflow terminates. 
However, if you set the keep_alive parameter to True or just want to halt the execution of a jobflow early you can terminate a jobflow by: ->>> import boto ->>> conn = boto.connect_emr() +>>> import boto.emr +>>> conn = boto.emr.connect_to_region('us-west-2') >>> conn.terminate_jobflow('<jobflow id>') - diff --git a/docs/source/getting_started.rst b/docs/source/getting_started.rst new file mode 100644 index 00000000..ab8e306f --- /dev/null +++ b/docs/source/getting_started.rst @@ -0,0 +1,177 @@ +.. _getting-started: + +========================= +Getting Started with Boto +========================= + +This tutorial will walk you through installing and configuring ``boto``, as +well how to use it to make API calls. + +This tutorial assumes you are familiar with Python & that you have registered +for an `Amazon Web Services`_ account. You'll need retrieve your +``Access Key ID`` and ``Secret Access Key`` from the web-based console. + +.. _`Amazon Web Services`: https://aws.amazon.com/ + + +Installing Boto +--------------- + +You can use ``pip`` to install the latest released version of ``boto``:: + + pip install boto + +If you want to install ``boto`` from source:: + + git clone git://github.com/boto/boto.git + cd boto + python setup.py install + + +Using Virtual Environments +-------------------------- + +Another common way to install ``boto`` is to use a ``virtualenv``, which +provides isolated environments. First, install the ``virtualenv`` Python +package:: + + pip install virtualenv + +Next, create a virtual environment by using the ``virtualenv`` command and +specifying where you want the virtualenv to be created (you can specify +any directory you like, though this example allows for compatibility with +``virtualenvwrapper``):: + + mkdir ~/.virtualenvs + virtualenv ~/.virtualenvs/boto + +You can now activate the virtual environment:: + + source ~/.virtualenvs/boto/bin/activate + +Now, any usage of ``python`` or ``pip`` (within the current shell) will default +to the new, isolated version within your virtualenv. + +You can now install ``boto`` into this virtual environment:: + + pip install boto + +When you are done using ``boto``, you can deactivate your virtual environment:: + + deactivate + +If you are creating a lot of virtual environments, `virtualenvwrapper`_ +is an excellent tool that lets you easily manage your virtual environments. + +.. _`virtualenvwrapper`: http://virtualenvwrapper.readthedocs.org/en/latest/ + + +Configuring Boto Credentials +---------------------------- + +You have a few options for configuring ``boto`` (see :doc:`boto_config_tut`). +For this tutorial, we'll be using a configuration file. First, create a +``~/.boto`` file with these contents:: + + [Credentials] + aws_access_key_id = YOURACCESSKEY + aws_secret_access_key = YOURSECRETKEY + +``boto`` supports a number of configuration values. For more information, +see :doc:`boto_config_tut`. The above file, however, is all we need for now. +You're now ready to use ``boto``. + + +Making Connections +------------------ + +``boto`` provides a number of convenience functions to simplify connecting to a +service. For example, to work with S3, you can run:: + + >>> import boto + >>> s3 = boto.connect_s3() + +If you want to connect to a different region, you can import the service module +and use the ``connect_to_region`` functions. 
For example, to create an EC2 +client in 'us-west-2' region, you'd run the following:: + + >>> import boto.ec2 + >>> ec2 = boto.ec2.connect_to_region('us-west-2') + + +Troubleshooting Connections +--------------------------- + +When calling the various ``connect_*`` functions, you might run into an error +like this:: + + >>> import boto + >>> s3 = boto.connect_s3() + Traceback (most recent call last): + File "<stdin>", line 1, in <module> + File "boto/__init__.py", line 121, in connect_s3 + return S3Connection(aws_access_key_id, aws_secret_access_key, **kwargs) + File "boto/s3/connection.py", line 171, in __init__ + validate_certs=validate_certs) + File "boto/connection.py", line 548, in __init__ + host, config, self.provider, self._required_auth_capability()) + File "boto/auth.py", line 668, in get_auth_handler + 'Check your credentials' % (len(names), str(names))) + boto.exception.NoAuthHandlerFound: No handler was ready to authenticate. 1 handlers were checked. ['HmacAuthV1Handler'] Check your credentials + +This is because ``boto`` cannot find credentials to use. Verify that you have +created a ``~/.boto`` file as shown above. You can also turn on debug logging +to verify where your credentials are coming from:: + + >>> import boto + >>> boto.set_stream_logger('boto') + >>> s3 = boto.connect_s3() + 2012-12-10 17:15:03,799 boto [DEBUG]:Using access key found in config file. + 2012-12-10 17:15:03,799 boto [DEBUG]:Using secret key found in config file. + + +Interacting with AWS Services +----------------------------- + +Once you have a client for the specific service you want, there are methods on +that object that will invoke API operations for that service. The following +code demonstrates how to create a bucket and put an object in that bucket:: + + >>> import boto + >>> import time + >>> s3 = boto.connect_s3() + + # Create a new bucket. Buckets must have a globally unique name (not just + # unique to your account). + >>> bucket = s3.create_bucket('boto-demo-%s' % int(time.time())) + + # Create a new key/value pair. + >>> key = bucket.new_key('mykey') + >>> key.set_contents_from_string("Hello World!") + + # Sleep to ensure the data is eventually there. + >>> time.sleep(2) + + # Retrieve the contents of ``mykey``. + >>> print key.get_contents_as_string() + 'Hello World!' + + # Delete the key. + >>> key.delete() + # Delete the bucket. + >>> bucket.delete() + +Each service supports a different set of commands. You'll want to refer to the +other guides & API references in this documentation, as well as referring to +the `official AWS API`_ documentation. + +.. _`official AWS API`: https://aws.amazon.com/documentation/ + +Next Steps +---------- + +For many of the services that ``boto`` supports, there are tutorials as +well as detailed API documentation. If you are interested in a specific +service, the tutorial for the service is a good starting point. For instance, +if you'd like more information on S3, check out the :ref:`S3 Tutorial <s3_tut>` +and the :doc:`S3 API reference <ref/s3>`. diff --git a/docs/source/index.rst b/docs/source/index.rst index 17777244..090de3b6 100644 --- a/docs/source/index.rst +++ b/docs/source/index.rst @@ -9,6 +9,13 @@ offered by `Amazon Web Services`_. .. _Amazon Web Services: http://aws.amazon.com/ +Getting Started +--------------- + +If you've never used ``boto`` before, you should read the +:doc:`Getting Started with Boto <getting_started>` guide to get familiar +with ``boto`` & its usage. 
+ Currently Supported Services ---------------------------- @@ -28,8 +35,10 @@ Currently Supported Services * :doc:`SimpleDB <simpledb_tut>` -- (:doc:`API Reference <ref/sdb>`) * :doc:`DynamoDB <dynamodb_tut>` -- (:doc:`API Reference <ref/dynamodb>`) - * Relational Data Services (RDS) -- (:doc:`API Reference <ref/rds>`) + * DynamoDB2 -- (:doc:`API Reference <ref/dynamodb2>`) + * :doc:`Relational Data Services (RDS) <rds_tut>` -- (:doc:`API Reference <ref/rds>`) * ElastiCache -- (:doc:`API Reference <ref/elasticache>`) + * Redshift -- (:doc:`API Reference <ref/redshift>`) * **Deployment and Management** @@ -97,6 +106,7 @@ Additional Resources .. toctree:: :hidden: + getting_started ec2_tut security_groups ref/ec2 @@ -111,6 +121,7 @@ Additional Resources ref/sdb_db dynamodb_tut ref/dynamodb + rds_tut ref/rds ref/cloudformation ref/iam @@ -136,6 +147,12 @@ Additional Resources boto_config_tut ref/index documentation + contributing + ref/datapipeline + ref/elasticache + ref/elastictranscoder + ref/redshift + ref/dynamodb2 Indices and tables diff --git a/docs/source/rds_tut.rst b/docs/source/rds_tut.rst new file mode 100644 index 00000000..6955cbe3 --- /dev/null +++ b/docs/source/rds_tut.rst @@ -0,0 +1,108 @@ +.. _rds_tut: + +======================================= +An Introduction to boto's RDS interface +======================================= + +This tutorial focuses on the boto interface to the Relational Database Service +from Amazon Web Services. This tutorial assumes that you have boto already +downloaded and installed, and that you wish to setup a MySQL instance in RDS. + +Creating a Connection +--------------------- +The first step in accessing RDS is to create a connection to the service. +The recommended method of doing this is as follows:: + + >>> import boto.rds + >>> conn = boto.rds.connect_to_region( + ... "us-west-2", + ... aws_access_key_id='<aws access key'>, + ... aws_secret_access_key='<aws secret key>') + +At this point the variable conn will point to an RDSConnection object in the +US-WEST-2 region. Bear in mind that just as any other AWS service, RDS is +region-specific. In this example, the AWS access key and AWS secret key are +passed in to the method explicitely. Alternatively, you can set the environment +variables: + +* ``AWS_ACCESS_KEY_ID`` - Your AWS Access Key ID +* ``AWS_SECRET_ACCESS_KEY`` - Your AWS Secret Access Key + +and then simply call:: + + >>> import boto.rds + >>> conn = boto.rds.connect_to_region("us-west-2") + +In either case, conn will point to an RDSConnection object which we will +use throughout the remainder of this tutorial. + +Starting an RDS Instance +------------------------ + +Creating a DB instance is easy. You can do so as follows:: + + >>> db = conn.create_dbinstance("db-master-1", 10, 'db.m1.small', 'root', 'hunter2') + +This example would create a DB identified as ``db-master-1`` with 10GB of +storage. This instance would be running on ``db.m1.small`` type, with the login +name being ``root``, and the password ``hunter2``. + +To check on the status of your RDS instance, you will have to query the RDS connection again:: + + >>> instances = conn.get_all_dbinstances("db-master-1") + >>> instances + [DBInstance:db-master-1] + >>> db = instances[0] + >>> db.status + u'available' + >>> db.endpoint + (u'db-master-1.aaaaaaaaaa.us-west-2.rds.amazonaws.com', 3306) + +Creating a Security Group +------------------------- + +Before you can actually connect to this RDS service, you must first +create a security group. 
You can add a CIDR range or an :py:class:`EC2 security +group <boto.ec2.securitygroup.SecurityGroup>` to your :py:class:`DB security +group <boto.rds.dbsecuritygroup.DBSecurityGroup>` :: + + >>> sg = conn.create_dbsecurity_group('web_servers', 'Web front-ends') + >>> sg.authorize(cidr_ip='10.3.2.45/32') + True + +You can then associate this security group with your RDS instance:: + + >>> db.modify(security_groups=[sg]) + + +Connecting to your New Database +------------------------------- + +Once you have reached this step, you can connect to your RDS instance as you +would with any other MySQL instance:: + + >>> db.endpoint + (u'db-master-1.aaaaaaaaaa.us-west-2.rds.amazonaws.com', 3306) + + % mysql -h db-master-1.aaaaaaaaaa.us-west-2.rds.amazonaws.com -u root -phunter2 + mysql> + + +Making a backup +--------------- + +You can also create snapshots of your database very easily:: + + >>> db.snapshot('db-master-1-2013-02-05') + DBSnapshot:db-master-1-2013-02-05 + + +Once this snapshot is complete, you can create a new database instance from +it:: + + >>> db2 = conn.restore_dbinstance_from_dbsnapshot( + ... 'db-master-1-2013-02-05', + ... 'db-restored-1', + ... 'db.m1.small', + ... 'us-west-2') + diff --git a/docs/source/ref/cloudsearch.rst b/docs/source/ref/cloudsearch.rst index 14671ee5..1610200a 100644 --- a/docs/source/ref/cloudsearch.rst +++ b/docs/source/ref/cloudsearch.rst @@ -7,7 +7,7 @@ Cloudsearch boto.cloudsearch ---------------- -.. automodule:: boto.swf +.. automodule:: boto.cloudsearch :members: :undoc-members: diff --git a/docs/source/ref/dynamodb2.rst b/docs/source/ref/dynamodb2.rst new file mode 100644 index 00000000..cfd1b6a1 --- /dev/null +++ b/docs/source/ref/dynamodb2.rst @@ -0,0 +1,26 @@ +.. ref-dynamodb2 + +========= +DynamoDB2 +========= + +boto.dynamodb2 +-------------- + +.. automodule:: boto.dynamodb2 + :members: + :undoc-members: + +boto.dynamodb2.layer1 +--------------------- + +.. automodule:: boto.dynamodb2.layer1 + :members: + :undoc-members: + +boto.dynamodb2.exceptions +------------------------- + +.. automodule:: boto.dynamodb2.exceptions + :members: + :undoc-members: diff --git a/docs/source/ref/index.rst b/docs/source/ref/index.rst index d01b0909..3def15d7 100644 --- a/docs/source/ref/index.rst +++ b/docs/source/ref/index.rst @@ -27,6 +27,7 @@ API Reference mws pyami rds + redshift route53 s3 sdb diff --git a/docs/source/ref/redshift.rst b/docs/source/ref/redshift.rst new file mode 100644 index 00000000..b3d84636 --- /dev/null +++ b/docs/source/ref/redshift.rst @@ -0,0 +1,26 @@ +.. _ref-redshift: + +======== +Redshift +======== + +boto.redshift +------------- + +.. automodule:: boto.redshift + :members: + :undoc-members: + +boto.redshift.layer1 +-------------------- + +.. automodule:: boto.redshift.layer1 + :members: + :undoc-members: + +boto.redshift.exceptions +------------------------ + +.. 
automodule:: boto.redshift.exceptions + :members: + :undoc-members: diff --git a/docs/source/s3_tut.rst b/docs/source/s3_tut.rst index 47841256..fc75e108 100644 --- a/docs/source/s3_tut.rst +++ b/docs/source/s3_tut.rst @@ -28,10 +28,10 @@ and then call the constructor without any arguments, like this: >>> conn = S3Connection() There is also a shortcut function in the boto package, called connect_s3 -that may provide a slightly easier means of creating a connection: +that may provide a slightly easier means of creating a connection:: ->>> import boto ->>> conn = boto.connect_s3() + >>> import boto + >>> conn = boto.connect_s3() In either case, conn will point to an S3Connection object which we will use throughout the remainder of this tutorial. @@ -44,14 +44,14 @@ create a bucket. A bucket is a container used to store key/value pairs in S3. A bucket can hold an unlimited amount of data so you could potentially have just one bucket in S3 for all of your information. Or, you could create separate buckets for different types of data. You can figure all of that out -later, first let's just create a bucket. That can be accomplished like this: +later, first let's just create a bucket. That can be accomplished like this:: ->>> bucket = conn.create_bucket('mybucket') -Traceback (most recent call last): - File "<stdin>", line 1, in ? - File "boto/connection.py", line 285, in create_bucket - raise S3CreateError(response.status, response.reason) -boto.exception.S3CreateError: S3Error[409]: Conflict + >>> bucket = conn.create_bucket('mybucket') + Traceback (most recent call last): + File "<stdin>", line 1, in ? + File "boto/connection.py", line 285, in create_bucket + raise S3CreateError(response.status, response.reason) + boto.exception.S3CreateError: S3Error[409]: Conflict Whoa. What happended there? Well, the thing you have to know about buckets is that they are kind of like domain names. It's one flat name @@ -72,21 +72,26 @@ Creating a Bucket In Another Location The example above assumes that you want to create a bucket in the standard US region. However, it is possible to create buckets in other locations. To do so, first import the Location object from the -boto.s3.connection module, like this: - ->>> from boto.s3.connection import Location ->>> dir(Location) -['DEFAULT', 'EU', 'USWest', 'APSoutheast', '__doc__', '__module__'] ->>> - -As you can see, the Location object defines three possible locations; -DEFAULT, EU, USWest, and APSoutheast. By default, the location is the -empty string which is interpreted as the US Classic Region, the -original S3 region. However, by specifying another location at the -time the bucket is created, you can instruct S3 to create the bucket -in that location. For example: - ->>> conn.create_bucket('mybucket', location=Location.EU) +boto.s3.connection module, like this:: + + >>> from boto.s3.connection import Location + >>> print '\n'.join(i for i in dir(Location) if i[0].isupper()) + APNortheast + APSoutheast + APSoutheast2 + DEFAULT + EU + SAEast + USWest + USWest2 + +As you can see, the Location object defines a number of possible locations. By +default, the location is the empty string which is interpreted as the US +Classic Region, the original S3 region. However, by specifying another +location at the time the bucket is created, you can instruct S3 to create the +bucket in that location. For example:: + + >>> conn.create_bucket('mybucket', location=Location.EU) will create the bucket in the EU region (assuming the name is available). 
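If the name is already taken, ``create_bucket`` raises the ``S3CreateError`` shown
above. A minimal sketch of one way to handle that, assuming the ``conn`` object from
earlier (the timestamp suffix is only an illustration, not something boto provides)::

    >>> import time
    >>> from boto.exception import S3CreateError
    >>> from boto.s3.connection import Location
    >>> try:
    ...     bucket = conn.create_bucket('mybucket', location=Location.EU)
    ... except S3CreateError:
    ...     # The name is taken globally; fall back to a (hopefully) unique name.
    ...     bucket = conn.create_bucket('mybucket-%d' % int(time.time()),
    ...                                 location=Location.EU)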
@@ -99,34 +104,36 @@ or what format you use to store it. All you need is a key that is unique within your bucket. The Key object is used in boto to keep track of data stored in S3. To store -new data in S3, start by creating a new Key object: +new data in S3, start by creating a new Key object:: ->>> from boto.s3.key import Key ->>> k = Key(bucket) ->>> k.key = 'foobar' ->>> k.set_contents_from_string('This is a test of S3') + >>> from boto.s3.key import Key + >>> k = Key(bucket) + >>> k.key = 'foobar' + >>> k.set_contents_from_string('This is a test of S3') The net effect of these statements is to create a new object in S3 with a key of "foobar" and a value of "This is a test of S3". To validate that -this worked, quit out of the interpreter and start it up again. Then: +this worked, quit out of the interpreter and start it up again. Then:: ->>> import boto ->>> c = boto.connect_s3() ->>> b = c.create_bucket('mybucket') # substitute your bucket name here ->>> from boto.s3.key import Key ->>> k = Key(b) ->>> k.key = 'foobar' ->>> k.get_contents_as_string() -'This is a test of S3' + >>> import boto + >>> c = boto.connect_s3() + >>> b = c.create_bucket('mybucket') # substitute your bucket name here + >>> from boto.s3.key import Key + >>> k = Key(b) + >>> k.key = 'foobar' + >>> k.get_contents_as_string() + 'This is a test of S3' So, we can definitely store and retrieve strings. A more interesting example may be to store the contents of a local file in S3 and then retrieve the contents to another local file. ->>> k = Key(b) ->>> k.key = 'myfile' ->>> k.set_contents_from_filename('foo.jpg') ->>> k.get_contents_to_filename('bar.jpg') +:: + + >>> k = Key(b) + >>> k.key = 'myfile' + >>> k.set_contents_from_filename('foo.jpg') + >>> k.get_contents_to_filename('bar.jpg') There are a couple of things to note about this. When you send data to S3 from a file or filename, boto will attempt to determine the correct @@ -136,24 +143,77 @@ guessing. The other thing to note is that boto does stream the content to and from S3 so you should be able to send and receive large files without any problem. +Accessing A Bucket +------------------ + +Once a bucket exists, you can access it by getting the bucket. For example:: + + >>> mybucket = conn.get_bucket('mybucket') # Substitute in your bucket name + >>> mybucket.list() + <listing of keys in the bucket) + +By default, this method tries to validate the bucket's existence. You can +override this behavior by passing ``validate=False``.:: + + >>> nonexistent = conn.get_bucket('i-dont-exist-at-all', validate=False) + +If the bucket does not exist, a ``S3ResponseError`` will commonly be thrown. If +you'd rather not deal with any exceptions, you can use the ``lookup`` method.:: + + >>> nonexistent = conn.lookup('i-dont-exist-at-all') + >>> if nonexistent is None: + ... print "No such bucket!" + ... + No such bucket! + +Deleting A Bucket +----------------- + +Removing a bucket can be done using the ``delete_bucket`` method. For example:: + + >>> conn.delete_bucket('mybucket') # Substitute in your bucket name + +The bucket must be empty of keys or this call will fail & an exception will be +raised. You can remove a non-empty bucket by doing something like:: + + >>> full_bucket = conn.get_bucket('bucket-to-delete') + # It's full of keys. Delete them all. + >>> for key in full_bucket.list(): + ... key.delete() + ... + # The bucket is empty now. Delete it. + >>> conn.delete_bucket('bucket-to-delete') + +.. warning:: + + This method can cause data loss! 
Be very careful when using it. + + Additionally, be aware that using the above method for removing all keys + and deleting the bucket involves a request for each key. As such, it's not + particularly fast & is very chatty. + Listing All Available Buckets ----------------------------- In addition to accessing specific buckets via the create_bucket method you can also get a list of all available buckets that you have created. ->>> rs = conn.get_all_buckets() +:: + + >>> rs = conn.get_all_buckets() This returns a ResultSet object (see the SQS Tutorial for more info on ResultSet objects). The ResultSet can be used as a sequence or list type object to retrieve Bucket objects. ->>> len(rs) -11 ->>> for b in rs: -... print b.name -... -<listing of available buckets> ->>> b = rs[0] +:: + + >>> len(rs) + 11 + >>> for b in rs: + ... print b.name + ... + <listing of available buckets> + >>> b = rs[0] Setting / Getting the Access Control List for Buckets and Keys -------------------------------------------------------------- @@ -195,17 +255,19 @@ You can also retrieve the current ACL for a Bucket or Key object using the get_acl object. This method parses the AccessControlPolicy response sent by S3 and creates a set of Python objects that represent the ACL. ->>> acp = b.get_acl() ->>> acp -<boto.acl.Policy instance at 0x2e6940> ->>> acp.acl -<boto.acl.ACL instance at 0x2e69e0> ->>> acp.acl.grants -[<boto.acl.Grant instance at 0x2e6a08>] ->>> for grant in acp.acl.grants: -... print grant.permission, grant.display_name, grant.email_address, grant.id -... -FULL_CONTROL <boto.user.User instance at 0x2e6a30> +:: + + >>> acp = b.get_acl() + >>> acp + <boto.acl.Policy instance at 0x2e6940> + >>> acp.acl + <boto.acl.ACL instance at 0x2e69e0> + >>> acp.acl.grants + [<boto.acl.Grant instance at 0x2e6a08>] + >>> for grant in acp.acl.grants: + ... print grant.permission, grant.display_name, grant.email_address, grant.id + ... + FULL_CONTROL <boto.user.User instance at 0x2e6a30> The Python objects representing the ACL can be found in the acl.py module of boto. @@ -213,10 +275,10 @@ of boto. Both the Bucket object and the Key object also provide shortcut methods to simplify the process of granting individuals specific access. For example, if you want to grant an individual user READ -access to a particular object in S3 you could do the following: +access to a particular object in S3 you could do the following:: ->>> key = b.lookup('mykeytoshare') ->>> key.add_email_grant('READ', 'foo@bar.com') + >>> key = b.lookup('mykeytoshare') + >>> key.add_email_grant('READ', 'foo@bar.com') The email address provided should be the one associated with the users AWS account. There is a similar method called add_user_grant that accepts the @@ -227,23 +289,23 @@ Setting/Getting Metadata Values on Key Objects S3 allows arbitrary user metadata to be assigned to objects within a bucket. To take advantage of this S3 feature, you should use the set_metadata and get_metadata methods of the Key object to set and retrieve metadata associated -with an S3 object. For example: +with an S3 object. 
For example:: ->>> k = Key(b) ->>> k.key = 'has_metadata' ->>> k.set_metadata('meta1', 'This is the first metadata value') ->>> k.set_metadata('meta2', 'This is the second metadata value') ->>> k.set_contents_from_filename('foo.txt') + >>> k = Key(b) + >>> k.key = 'has_metadata' + >>> k.set_metadata('meta1', 'This is the first metadata value') + >>> k.set_metadata('meta2', 'This is the second metadata value') + >>> k.set_contents_from_filename('foo.txt') This code associates two metadata key/value pairs with the Key k. To retrieve -those values later: +those values later:: ->>> k = b.get_key('has_metadata') ->>> k.get_metadata('meta1') -'This is the first metadata value' ->>> k.get_metadata('meta2') -'This is the second metadata value' ->>> + >>> k = b.get_key('has_metadata') + >>> k.get_metadata('meta1') + 'This is the first metadata value' + >>> k.get_metadata('meta2') + 'This is the second metadata value' + >>> Setting/Getting/Deleting CORS Configuration on a Bucket ------------------------------------------------------- @@ -254,12 +316,12 @@ in a different domain. With CORS support in Amazon S3, you can build rich client-side web applications with Amazon S3 and selectively allow cross-origin access to your Amazon S3 resources. -To create a CORS configuration and associate it with a bucket: +To create a CORS configuration and associate it with a bucket:: ->>> from boto.s3.cors import CORSConfiguration ->>> cors_cfg = CORSConfiguration() ->>> cors_cfg.add_rule(['PUT', 'POST', 'DELETE'], 'https://www.example.com', allowed_header='*', max_age_seconds=3000, expose_header='x-amz-server-side-encryption') ->>> cors_cfg.add_rule('GET', '*') + >>> from boto.s3.cors import CORSConfiguration + >>> cors_cfg = CORSConfiguration() + >>> cors_cfg.add_rule(['PUT', 'POST', 'DELETE'], 'https://www.example.com', allowed_header='*', max_age_seconds=3000, expose_header='x-amz-server-side-encryption') + >>> cors_cfg.add_rule('GET', '*') The above code creates a CORS configuration object with two rules. @@ -270,20 +332,20 @@ The above code creates a CORS configuration object with two rules. return any requested headers. * The second rule allows cross-origin GET requests from all origins. -To associate this configuration with a bucket: +To associate this configuration with a bucket:: ->>> import boto ->>> c = boto.connect_s3() ->>> bucket = c.lookup('mybucket') ->>> bucket.set_cors(cors_cfg) + >>> import boto + >>> c = boto.connect_s3() + >>> bucket = c.lookup('mybucket') + >>> bucket.set_cors(cors_cfg) -To retrieve the CORS configuration associated with a bucket: +To retrieve the CORS configuration associated with a bucket:: ->>> cors_cfg = bucket.get_cors() + >>> cors_cfg = bucket.get_cors() -And, finally, to delete all CORS configurations from a bucket: +And, finally, to delete all CORS configurations from a bucket:: ->>> bucket.delete_cors() + >>> bucket.delete_cors() Transitioning Objects to Glacier -------------------------------- @@ -298,48 +360,50 @@ configurations are assigned to buckets and require these parameters: * The date (or time period) when you want S3 to perform these actions. For example, given a bucket ``s3-glacier-boto-demo``, we can first retrieve the -bucket: +bucket:: ->>> import boto ->>> c = boto.connect_s3() ->>> bucket = c.get_bucket('s3-glacier-boto-demo') + >>> import boto + >>> c = boto.connect_s3() + >>> bucket = c.get_bucket('s3-glacier-boto-demo') Then we can create a lifecycle object. 
In our example, we want all objects under ``logs/*`` to transition to Glacier 30 days after the object is created. ->>> from boto.s3.lifecycle import Lifecycle, Transition, Rule ->>> to_glacier = Transition(days=30, storage_class='GLACIER') ->>> rule = Rule('ruleid', 'logs/', 'Enabled', transition=to_glacier) ->>> lifecycle = Lifecycle() ->>> lifecycle.append(rule) +:: + + >>> from boto.s3.lifecycle import Lifecycle, Transition, Rule + >>> to_glacier = Transition(days=30, storage_class='GLACIER') + >>> rule = Rule('ruleid', 'logs/', 'Enabled', transition=to_glacier) + >>> lifecycle = Lifecycle() + >>> lifecycle.append(rule) .. note:: For API docs for the lifecycle objects, see :py:mod:`boto.s3.lifecycle` -We can now configure the bucket with this lifecycle policy: +We can now configure the bucket with this lifecycle policy:: ->>> bucket.configure_lifecycle(lifecycle) + >>> bucket.configure_lifecycle(lifecycle) True -You can also retrieve the current lifecycle policy for the bucket: +You can also retrieve the current lifecycle policy for the bucket:: ->>> current = bucket.get_lifecycle_config() ->>> print current[0].transition -<Transition: in: 30 days, GLACIER> + >>> current = bucket.get_lifecycle_config() + >>> print current[0].transition + <Transition: in: 30 days, GLACIER> When an object transitions to Glacier, the storage class will be -updated. This can be seen when you **list** the objects in a bucket: +updated. This can be seen when you **list** the objects in a bucket:: ->>> for key in bucket.list(): -... print key, key.storage_class -... -<Key: s3-glacier-boto-demo,logs/testlog1.log> GLACIER + >>> for key in bucket.list(): + ... print key, key.storage_class + ... + <Key: s3-glacier-boto-demo,logs/testlog1.log> GLACIER -You can also use the prefix argument to the ``bucket.list`` method: +You can also use the prefix argument to the ``bucket.list`` method:: ->>> print list(b.list(prefix='logs/testlog1.log'))[0].storage_class -u'GLACIER' + >>> print list(b.list(prefix='logs/testlog1.log'))[0].storage_class + u'GLACIER' Restoring Objects from Glacier @@ -351,34 +415,36 @@ method of the key object. The ``restore`` method takes an integer that specifies the number of days to keep the object in S3. ->>> import boto ->>> c = boto.connect_s3() ->>> bucket = c.get_bucket('s3-glacier-boto-demo') ->>> key = bucket.get_key('logs/testlog1.log') ->>> key.restore(days=5) +:: + + >>> import boto + >>> c = boto.connect_s3() + >>> bucket = c.get_bucket('s3-glacier-boto-demo') + >>> key = bucket.get_key('logs/testlog1.log') + >>> key.restore(days=5) It takes about 4 hours for a restore operation to make a copy of the archive available for you to access. While the object is being restored, the -``ongoing_restore`` attribute will be set to ``True``: +``ongoing_restore`` attribute will be set to ``True``:: ->>> key = bucket.get_key('logs/testlog1.log') ->>> print key.ongoing_restore -True + >>> key = bucket.get_key('logs/testlog1.log') + >>> print key.ongoing_restore + True When the restore is finished, this value will be ``False`` and the expiry -date of the object will be non ``None``: +date of the object will be non ``None``:: ->>> key = bucket.get_key('logs/testlog1.log') ->>> print key.ongoing_restore -False ->>> print key.expiry_date -"Fri, 21 Dec 2012 00:00:00 GMT" + >>> key = bucket.get_key('logs/testlog1.log') + >>> print key.ongoing_restore + False + >>> print key.expiry_date + "Fri, 21 Dec 2012 00:00:00 GMT" .. 
note:: If there is no restore operation either in progress or completed, the ``ongoing_restore`` attribute will be ``None``. -Once the object is restored you can then download the contents: +Once the object is restored you can then download the contents:: ->>> key.get_contents_to_filename('testlog1.log') + >>> key.get_contents_to_filename('testlog1.log') diff --git a/docs/source/ses_tut.rst b/docs/source/ses_tut.rst index c71e8868..d19a4e36 100644 --- a/docs/source/ses_tut.rst +++ b/docs/source/ses_tut.rst @@ -15,18 +15,19 @@ Creating a Connection The first step in accessing SES is to create a connection to the service.
To do so, the most straightforward way is the following::
- >>> import boto
- >>> conn = boto.connect_ses(
+ >>> import boto.ses
+ >>> conn = boto.ses.connect_to_region(
+ 'us-west-2',
aws_access_key_id='<YOUR_AWS_KEY_ID>',
aws_secret_access_key='<YOUR_AWS_SECRET_KEY>')
>>> conn
- SESConnection:email.us-east-1.amazonaws.com
+ SESConnection:email.us-west-2.amazonaws.com
Bear in mind that if you have your credentials in boto config in your home
directory, the two keyword arguments in the call above are not needed. More
details on configuration can be found in :doc:`boto_config_tut`.
-The :py:func:`boto.connect_ses` functions returns a
+The :py:func:`boto.ses.connect_to_region` function returns a
:py:class:`boto.ses.connection.SESConnection` instance, which is the boto API
for working with SES.
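As a quick check that the connection works, you can send a message through it. The
following is a minimal sketch; both addresses are placeholders and must already be
verified with SES before sending will succeed::

    >>> conn.send_email(
    ...     'sender@example.com',               # verified source address
    ...     'Test email from boto',             # subject
    ...     'This message was sent with SES.',  # body text
    ...     ['recipient@example.com'])          # list of destination addresses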
@@ -168,4 +169,4 @@ where we'll just show a short excerpt here:: ]
}
}
- }
\ No newline at end of file + }
diff --git a/docs/source/simpledb_tut.rst b/docs/source/simpledb_tut.rst index 39607260..98cabfe0 100644 --- a/docs/source/simpledb_tut.rst +++ b/docs/source/simpledb_tut.rst @@ -13,8 +13,11 @@ Creating a Connection The first step in accessing SimpleDB is to create a connection to the service. To do so, the most straight forward way is the following:: - >>> import boto - >>> conn = boto.connect_sdb(aws_access_key_id='<YOUR_AWS_KEY_ID>',aws_secret_access_key='<YOUR_AWS_SECRET_KEY>') + >>> import boto.sdb + >>> conn = boto.sdb.connect_to_region( + ... 'us-west-2', + ... aws_access_key_id='<YOUR_AWS_KEY_ID>', + ... aws_secret_access_key='<YOUR_AWS_SECRET_KEY>') >>> conn SDBConnection:sdb.amazonaws.com >>> diff --git a/docs/source/sqs_tut.rst b/docs/source/sqs_tut.rst index 9445de26..d4d69c98 100644 --- a/docs/source/sqs_tut.rst +++ b/docs/source/sqs_tut.rst @@ -15,12 +15,12 @@ The recommended method of doing this is as follows:: >>> import boto.sqs >>> conn = boto.sqs.connect_to_region( - ... "us-east-1", + ... "us-west-2", ... aws_access_key_id='<aws access key'>, ... aws_secret_access_key='<aws secret key>') At this point the variable conn will point to an SQSConnection object in the -US-EAST-1 region. Bear in mind that just as any other AWS service, SQS is +US-WEST-2 region. Bear in mind that just as any other AWS service, SQS is region-specific. In this example, the AWS access key and AWS secret key are passed in to the method explicitely. Alternatively, you can set the environment variables: @@ -31,7 +31,7 @@ variables: and then simply call:: >>> import boto.sqs - >>> conn = boto.sqs.connect_to_region("us-east-1") + >>> conn = boto.sqs.connect_to_region("us-west-2") In either case, conn will point to an SQSConnection object which we will use throughout the remainder of this tutorial. @@ -217,7 +217,7 @@ If I want to delete the entire queue, I would use: >>> conn.delete_queue(q) -However, and this is a good safe guard, this won't succeed unless the queue is empty. +This will delete the queue, even if there are still messages within the queue. Additional Information ---------------------- diff --git a/docs/source/vpc_tut.rst b/docs/source/vpc_tut.rst index ce26ead0..1244c4e1 100644 --- a/docs/source/vpc_tut.rst +++ b/docs/source/vpc_tut.rst @@ -97,4 +97,13 @@ Releasing an Elastic IP Attached to a VPC Instance -------------------------------------------------- >>> ec2.connection.release_address(None, 'eipalloc-35cf685d') ->>>
\ No newline at end of file +>>> + +To Get All VPN Connections +-------------------------- +>>> vpns = c.get_all_vpn_connections() +>>> vpns[0].id +u'vpn-12ef67bv' +>>> tunnels = vpns[0].tunnels +>>> tunnels +[VpnTunnel: 177.12.34.56, VpnTunnel: 177.12.34.57] |