Most websites are static HTML pages, yet people continue to spin up full web-server infrastructure running Apache or Nginx. If you want to keep costs low, you could use DigitalOcean, but I’m an AWS guy, so we’ll do things the AWS way.
So let’s talk about the goal. The more you work with cloud-based infrastructure, the more you understand it’s not about the hardware; it’s about the software, or code. Being able to version control your system is fantastic, and that extends to versioning your infrastructure: your hardware is no longer a lovely little pet you maintain, it has become cattle that you slaughter and eat. Meaning you don’t care about it; if it becomes an issue, you delete it and move on.
If you plan on running a blog, your best bet is to pay the small yearly cost to WordPress.com; they do a fantastic job. But most websites just post simple information. Think of a restaurant: how often does that content change? The menu might change weekly or monthly, but contact information, location, etc. does not. This is where a statically hosted web page is perfect.
Now, since we all love Google, I did a quick search using these keywords:
automate static S3 GitHub
AWS is very powerful, and someone else has already written a blog post covering what I wanted to explain:
“Automate static website deployment from Github to S3 using AWS CodePipeline – sithum DevOps”
If you follow that post, it will do everything you need – if you’re using GitHub. It shows how to create a CodePipeline that pushes updates to S3 whenever you commit changes to your GitHub project. A couple of parts are left out, though. One is SNS, Amazon Simple Notification Service, which is used to set up email or SMS notices for your pipeline: success, failure, or a manual-intervention alert if something happens to your pipeline. You also need to have your S3 buckets already created. I’ll post another article going over what I would call “AWS 101”, outlining the security, billing, and alert settings to configure first in your AWS account.
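As a rough sketch of those missing pieces, here is how you might pre-create the bucket and wire up an SNS topic with the AWS CLI. The bucket name, topic name, account ID, region, and email address below are all placeholders of my own, not values from the linked post:

```shell
# Create the bucket that will hold the site. Bucket names are
# globally unique, so "example-static-site" is a placeholder.
aws s3 mb s3://example-static-site --region us-east-1

# Turn on static website hosting for the bucket.
aws s3 website s3://example-static-site \
    --index-document index.html \
    --error-document error.html

# Create an SNS topic for pipeline notifications.
aws sns create-topic --name pipeline-alerts

# Subscribe your email to the topic (the ARN here is a placeholder;
# use the TopicArn returned by create-topic). You will receive a
# confirmation email you must accept before notices are delivered.
aws sns subscribe \
    --topic-arn arn:aws:sns:us-east-1:123456789012:pipeline-alerts \
    --protocol email \
    --notification-endpoint you@example.com
```

From there, the topic ARN can be attached to the pipeline’s notification settings so you hear about successes and failures without watching the console.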
Another little thing to add to this pipeline is invalidating CloudFront data. (CloudFront is a CDN that distributes your content from an origin, yours being S3, to geographically distributed edge servers around the globe.) Just because you updated S3 with newer code doesn’t mean CloudFront is serving the most up-to-date content. You would either need to let the cached content expire on those edge servers, usually after 24 hours, at which point a fresh pull from your origin happens, or submit an invalidation request, which deletes the file(s) from the edge caches. However, invalidation can be costly if you are continually invalidating single files. But CloudFront deserves a post of its own.
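For reference, an invalidation is a one-liner with the AWS CLI. The distribution ID below is a made-up placeholder; use your own from the CloudFront console:

```shell
# Invalidate everything under the distribution so the edge
# caches re-pull fresh content from the S3 origin.
aws cloudfront create-invalidation \
    --distribution-id E1EXAMPLE12345 \
    --paths "/*"
```

Note that `"/*"` counts as a single invalidation path, so for a small site a wildcard invalidation after each deploy is typically cheaper than invalidating changed files one at a time.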
Also, keep in mind that each time you run your pipeline, i.e., submit updates, it pushes new content into S3. Having an “active pipeline” costs $1.00 per month, and that does not include the other service costs that might be associated with running the pipeline, e.g., data transfer. So factor that into your budget.