Tag: amazon-web-services

Using LetsEncrypt on Amazon Linux

For a number of years now I’ve been using LetsEncrypt to provide free SSL certificates for the Apex applications I host. These certificates last for 90 days and are renewed automatically by a simple script on my server.

By the way – if you’re not already using https for your public-facing Apex applications, you should be, even if your site doesn’t have data entry.

Each LetsEncrypt certificate can cover multiple subdomains. They don’t currently support wildcard domains (e.g. *.example.com) but they are planning to add this next year (2018).

To install LetsEncrypt I ran the following on my Amazon Linux instance (note – this is my web server, not my database server):

cd /opt
git clone https://github.com/letsencrypt/letsencrypt
cd letsencrypt
./letsencrypt-auto -v --debug

(when prompted at the last step I typed “c” to cancel the subsequent steps)

It wasn’t easy at first: I hit a number of errors, which I’d google (or search the community forum for) until I eventually found reasonable answers. I’ve had to reinstall a number of times, as the OS is patched regularly and certbot is updated from time to time.

I use Apache to provide about a dozen virtual hosts and therefore the automated installation option didn’t work for me. Instead, I’ve got lines like these in each VirtualHost:

<VirtualHost *:443>
 ServerName subdomain.mydomain.com
 ServerAlias subdomain.mydomain.com
 SSLEngine on
 SSLCertificateFile "/etc/letsencrypt/live/mydomain.com/cert.pem"
 SSLCertificateKeyFile "/etc/letsencrypt/live/mydomain.com/privkey.pem"
 SSLCertificateChainFile "/etc/letsencrypt/live/mydomain.com/chain.pem"
 ...
</VirtualHost>
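Because the certificates are registered with the webroot plugin pointed at /var/www/html (see the certonly command below), each HTTP (port 80) VirtualHost also needs to serve the ACME challenge files from that same webroot. A minimal sketch of one way to do that (the Alias is an assumption for illustration, not my exact config):

<VirtualHost *:80>
 ServerName subdomain.mydomain.com
 # serve LetsEncrypt's ACME challenge files from the shared webroot
 Alias /.well-known/acme-challenge/ "/var/www/html/.well-known/acme-challenge/"
 <Directory "/var/www/html/.well-known/acme-challenge/">
  # Apache 2.4 syntax; on 2.2 use "Order allow,deny" / "Allow from all"
  Require all granted
 </Directory>
 ...
</VirtualHost>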

To register a certificate I used the following command as root (all one line):

/opt/letsencrypt/letsencrypt-auto
certonly --webroot -w /var/www/html
-d mydomain.com,www.mydomain.com,sub1.mydomain.com,sub2.mydomain.com

This generates all the keys and certificates and stores them locally. No private keys ever leave the server. This command uses SAN (Subject Alternative Names) to combine multiple subdomains in one certificate. I run this command again separately for each domain.
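So, for example, a second domain gets its own run of the same command (the domain names here are just placeholders):

/opt/letsencrypt/letsencrypt-auto
certonly --webroot -w /var/www/html
-d otherdomain.com,www.otherdomain.com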

To renew all my certificates I run the following command as root:

/opt/letsencrypt/letsencrypt-auto renew -n --no-self-upgrade
service httpd restart

This will automatically skip any certificates that are not yet due to expire. I’ve put the above script in a file which is run by cron on a monthly basis.

0 20 1 * * /path-to-script/renewall.sh
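The renewall.sh script itself doesn’t need to be anything more than the two commands above wrapped in a shell script; something like this sketch (the log file location is an assumption):

#!/bin/bash
# renewall.sh - renew any LetsEncrypt certificates that are due to expire
# (sketch only; the log file path is an assumption)
/opt/letsencrypt/letsencrypt-auto renew -n --no-self-upgrade >> /var/log/letsencrypt-renew.log 2>&1
# restart Apache so it picks up the renewed certificates
service httpd restart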

To get usage info on the options:

/opt/letsencrypt/letsencrypt-auto --help

Since it’s free, one cannot expect support from LetsEncrypt directly if there are issues; however, there is an active LetsEncrypt support community which can be helpful at times.

But it’s certainly made a big difference to my bottom line, and provided a bit of peace of mind to my users.

Related

“Automate Amazon S3 Storage with Alexandria”

In Perth this morning, at a breakfast courtesy of the local AUSOUG, I spoke about using the Alexandria PL/SQL Library to automate various tasks with Amazon’s Simple Storage Service (S3). If you haven’t used Amazon Web Services before, or haven’t looked at Alexandria yet, and you enjoy discovering new capabilities with PL/SQL, I think you’ll find this interesting.

The PowerPoint slides and demo script are now available on my Presentations page.

In late October I’ll be speaking at the 20:20 Foresight Perth Conference – more details later.

Alexandria: May Cause Addiction

Ever since I downloaded the Alexandria PL/SQL library, I haven’t been able to put it down. Just recently I decided I wanted to serve up a whole lot of media files directly from Amazon’s S3 simple storage service, instead of serving them from within my EC2 (Elastic Compute Cloud) instance. They were just wasting my Linux server’s time responding to HTTP requests.

So, I quickly wrote the following code to transfer them:

DECLARE
  l_blob BLOB;
BEGIN
  /* initialise my AWS session */
  ALEX.amazon_aws_auth_pkg.init
    ( 'yyy-my-aws-id-yyy'
    , 'xxx-not-telling-xxx'
    , p_gmt_offset => -8);
  FOR rec IN (
    SELECT id, filename, mime_type, location
    FROM myfiles
    WHERE location = 'http://myserver/media/'
  ) LOOP
    /* read the file from its current location */
    l_blob := ALEX.http_util_pkg.get_blob_from_url
      (rec.location || rec.filename);
    IF DBMS_LOB.getLength(l_blob) > 0 THEN
      /* upload the file to Amazon S3 */
      ALEX.amazon_aws_s3_pkg.new_object
        ( 'mybucket'
        , rec.filename
        , l_blob
        , rec.mime_type
        , ALEX.amazon_aws_s3_pkg.g_acl_public_read);
      UPDATE myfiles
      SET location = 'https://mybucket.s3-ap-southeast-1.amazonaws.com/'
      WHERE id = rec.id;
      COMMIT;
    END IF;
  END LOOP;
END;

After a short while, all the files had been copied across to my bucket on S3, and my table updated so that my web site now points people’s browsers to the new location for those files.

Of course, I could have used UTL_FILE to read the files from disk, but then I’d have to first create a directory, and write a loop to read the file in chunks into the BLOB. Why bother with all that when I can just call http_util_pkg.get_blob_from_url and get it all in one go?
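For comparison, here’s roughly what the UTL_FILE route would look like for a single file (a sketch only, assuming a hypothetical directory object called MEDIA_DIR and a hypothetical file name; not code from my server):

DECLARE
  l_file UTL_FILE.file_type;
  l_blob BLOB;
  l_raw  RAW(32767);
BEGIN
  /* assumes a directory object MEDIA_DIR pointing at the media folder (hypothetical) */
  l_file := UTL_FILE.fopen('MEDIA_DIR', 'somefile.jpg', 'rb', 32767);
  DBMS_LOB.createtemporary(l_blob, cache => TRUE);
  BEGIN
    LOOP
      /* read the file in chunks and append each chunk to the BLOB */
      UTL_FILE.get_raw(l_file, l_raw, 32767);
      DBMS_LOB.writeappend(l_blob, UTL_RAW.length(l_raw), l_raw);
    END LOOP;
  EXCEPTION
    WHEN NO_DATA_FOUND THEN NULL; /* end of file */
  END;
  UTL_FILE.fclose(l_file);
  /* l_blob is now ready to pass to amazon_aws_s3_pkg.new_object as before */
END;

All of which get_blob_from_url does in a single call.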

That’s the trouble with powerful utilities like Alexandria: they’re too easy to use, make tasks like this trivial, and you start finding all sorts of uses for them. All of a sudden, Alexandria is your hammer, and the world is full of nails.

See also: this quick intro to using Alexandria’s API for Amazon S3.

New apex web site: www.myhomecontents.com.au

I’ve just opened my new web site, built entirely in Oracle Application Express, and announced it on my other blog:

“www.myhomecontents.com.au has been built with just one simple purpose – to help you keep track of your home contents. It’s in “beta” status at the moment, so I’m keen to get as many people as possible to try it out to flush out any bugs.”

Please check it out, and let me know what you think!