
Standup 07/23/2009: Timeouts with AWS-S3

Ask for Help

“When attempting to upload files with the aws-s3 gem I am receiving a lot of timeouts. This seems to happen with both small and large files. Has anyone run into this before?”

It was hypothesized that this could be the result of a slow internet connection saturating the upload stream. Does anyone know of a fix for S3 timeouts?

  1. Matt Conway says:

    Retry loop is what I do

    # execute the given block, retrying only when one of the given exceptions is raised
    def retry_on_failure(*exception_list)
      retry_count = 5
      begin
        yield
      rescue *exception_list => e
        if retry_count > 0
          retry_count -= 1
          puts "Exception, trying again #{retry_count} more times"
          retry
        else
          puts "Too many exceptions...skipping"
          puts e
          puts e.backtrace.join("\n") rescue nil
        end
      end
    end

    S3_EXCEPTIONS = [AWS::S3::S3Exception, Errno::ECONNRESET, Errno::ETIMEDOUT]

    retry_on_failure(*S3_EXCEPTIONS) do
      puts "Copying #{obj.key}"
      obj.copy(obj.key, :dest_bucket => dest_bucket, :copy_acl => true)
    end

  2. Alex Sharp says:

    Did you ever get this worked out? I have been wrestling with this issue for a couple of weeks. Any insight?

  3. Joseph Palermo says:

    We are almost sure the problem we were having was due to a saturated upstream connection. Ideally we would have found some way to throttle it back.

    The quick fix, which greatly reduced the timeouts, was to change the read_timeout on the S3 gem. There is no publicly exposed way of doing this, so we had to monkey patch the AWS::S3::Connection::create_connection method. You can either alias_method_chain it, or just re-implement it and set http.read_timeout = 300 or some other high value.
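
    A minimal sketch of that patch, assuming create_connection is the instance method on AWS::S3::Connection that builds and returns the underlying Net::HTTP object (check the method in your version of the aws-s3 gem before applying):

    require 'aws/s3'

    # Sketch of a monkey patch: wrap the gem's create_connection so the
    # Net::HTTP object it builds gets a longer read timeout. Method name
    # and behavior assumed from the description above; it may be private
    # in some gem versions, in which case alias_method still works.
    module AWS
      module S3
        class Connection
          alias_method :create_connection_without_timeout, :create_connection

          def create_connection
            http = create_connection_without_timeout
            http.read_timeout = 300 # seconds; raise or lower to taste
            http
          end
        end
      end
    end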
