Deploy a Site with a Build Process & a Custom Domain Name

What we’ve done so far in this series is look at the absolute easiest way to take some static files and turn them into a Real Website (one that anyone in the world can see). Then we took things one step further and did the very practical step of putting the code on GitHub, which was then able to update our Netlify-hosted site with changes.


Now we’re in Part 3, and we’ll pick up where we left off once again, and do two more very practical things:

  1. Use a site-building tool (we’ll use Astro)
  2. Use a real domain name

Once we’re done, we’ll be doing what dare-I-say most websites are doing. While we’re keeping this very beginner-focused, here’s the general outline for how websites operate: use tools to create files that become websites, place those files in version control, and utilize services to host our site.

Adding a Build Process

Why?

Why slap a build process onto a simple site?

It’s true: you don’t always need a build process, and I think avoiding adding tools is a better lesson than adding them because you think you have to. So I want to be clear that we’re doing it here because we’re learning. But here are some reasons why we might:

  • It can make content management easier. In our case, let’s build out some of the content with Markdown files, so that adding and changing content is potentially a bit easier and more flexible than editing one big HTML file.
  • The build process can help slot in additional tools for helping with things like performance, writing in other languages, or doing responsible things like running tests.

We’ll be adding Astro as the site-building tool for us, so it’s the tool that will be running the build process.

Incorporating

The trick here is to scaffold a new, bare-bones Astro site, then move the HTML/CSS/JavaScript assets from our existing project into place in the Astro project.

Astro is clear that the right way to get started is using the command line. So far in this series, we’ve avoided the command line, using the GitHub Desktop app to do Git work, even though Git is natively a command-line tool. But we’ve got no choice here, so let’s do it. I hope this isn’t a showstopper for anyone, but I think it’s outside the scope of this series to explain the command line. Naturally, there is a great course right here: Complete Intro to Linux and the Command-Line. The good news is that any operating system will have a free and perfectly serviceable command line app for you to use (like Terminal.app on macOS), and we won’t need it for long. We just need to get it open, then copy/paste the command Astro says to run on their homepage:

A terminal window displaying a command prompt with the last login time and a command to create an Astro project using npm.

It’ll ask you a series of questions, where the default answer is likely fine, and you’ll end up with all the necessary files to run a basic Astro site.

Slightly tricky part here for an absolute beginner: it’s going to scaffold the Astro site into a new folder, and that folder might have a strange, random name (if you didn’t name it yourself during the scaffolding questions). So in the terminal, you’ll type cd [name-of-folder] to “move” into it (“cd” is “change directory”). From inside that folder, you can now type npm run dev and it will run the Astro site. This is a change from our previous entirely static site: now, when we’re working on our Astro site, we need to run it like we just did.
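
For reference, the whole terminal dance looks something like this (the folder name here is a stand-in for whatever you chose or Astro generated):

# Scaffold a new Astro project (the command from Astro's homepage)
npm create astro@latest

# Move into the folder the scaffolder created
cd my-astro-site

# Start the local development server (Astro defaults to http://localhost:4321)
npm run dev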

We get lots of little niceties from working this way, like when we edit the code and save, the browser immediately updates to show the changes.

Now this looks absolutely nothing like our site, which is expected. We now need to move our HTML/CSS/JavaScript into the Astro-scaffolded files. These will be jobs like:

  • HTML needs to move into .astro files to take best advantage of Astro features. So it’s likely our simple setup will involve porting most of it to an src/pages/index.astro file.
  • Moving CSS and JavaScript assets into src/assets and linking them up how Astro likes to do it (see the sketch below).
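
To give a feel for that porting work, here’s a minimal sketch of what src/pages/index.astro might end up looking like, assuming our files land in src/assets (the file names and markup here are illustrative, not prescriptive):

---
// src/pages/index.astro
// Code in this frontmatter fence runs at build time.
// Importing the stylesheet here lets Astro bundle and optimize it.
import '../assets/style.css';
---
<html lang="en">
  <head>
    <meta charset="utf-8" />
    <title>My Portfolio</title>
  </head>
  <body>
    <!-- The markup from our old index.html moves in here -->
    <h1>Hi, I'm Chris</h1>

    <!-- Imports inside a script tag get processed and bundled by Astro -->
    <script>
      import '../assets/script.js';
    </script>
  </body>
</html>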

It might be useful to look at my Git Commit that does this conversion.

It’s essentially our job to review the Astro site’s structure and integrate our existing code into the Astro setup.

Taking Advantage of Astro Features

Our plan was to build out some of our content using Markdown files. We’re jamming in this concept because we’re learning and it’s interesting. But it’s also a fairly common need and project requirement. So let’s do it with the “work” section of our portfolio site.

We’ve got these six images here. Let’s flesh out that section and make it actually powered by six Markdown files that link up the image but also have actual information about the project we worked on. This is a portfolio after all!

This is a perfect situation for Astro’s content collections. We can define what we want a work item to be like from a data perspective, then make Markdown files for each item.
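
Here’s a hedged sketch of what that can look like. The schema file location and shape vary a bit between Astro versions (older versions use src/content/config.ts; newer ones use src/content.config.ts with loaders), so treat this as the general idea rather than copy-paste truth:

// src/content/config.ts
import { defineCollection, z } from 'astro:content';

// Define what a "work" item is from a data perspective
const work = defineCollection({
  type: 'content',
  schema: z.object({
    title: z.string(),
    description: z.string(),
    image: z.string(),
  }),
});

export const collections = { work };

Each work item then becomes a Markdown file whose frontmatter must satisfy that schema, something like:

---
title: "Cool Client Project"
description: "A site I designed and built."
image: "/images/cool-client-project.jpg"
---

Longer notes about the project can go here as regular Markdown.

In a component, getCollection('work') from astro:content pulls all of those items in for rendering.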

You can absolutely do all this work by hand. I might argue that you should. More than once. But I also don’t want to be ignorant of the AI revolution in how developers are working these days. I also think that fairly rote tasks like this are usually done quite well by AI agents. That’s particularly true here, as we’re doing something very basic and with-the-grain in a fairly popular framework with good open documentation.

I used an AI agent myself to do this job because I wanted to give it a whirl! I had just heard of Jules from Google, so I gave that one a try, but there are so many other choices. I’ve used Cursor a bunch, for example, which just launched a web version of agents that seems interesting.

I told Jules:

in index.astro, there is a div with class “work__container”. I want to turn that area into an Astro Collection. Each of those images should actually be a markdown file. That markdown file has more stuff in it like a title and description as well as the image.

I’m sure it would have been happy to take follow-up instructions and all that, but this single prompt did the job just fine, and it ended up as a PR (Pull Request) against the GitHub repo we set up.

Just a little hand-tweaking of that new Work.astro file, and we have a nice new section that will be easy to update in the future by simply editing Markdown files.

Updating Netlify

We need to tell Netlify that our site is different now! No longer is it entirely static files. It’s true that Astro makes totally static files that can essentially be served in the same way, but when you use a site-building tool like Astro, the approach is to have the build process run when you deploy the site. That might sound a little strange if you’re learning about this for the first time, but it’s true.

When Astro builds your site for you locally, it builds your website in a folder called dist. You can see that in the .gitignore file that came into existence when we scaffolded Astro, dist is in there, which means “do not track any of the files in that folder in Git”, meaning they don’t go to GitHub at all, and don’t go to Netlify. The reason for that is generally that it’s just noisy. The changes to those “built” files will occur on almost every commit, and it’s not particularly interesting to see those files change in Git. It’s interesting to see what you, the author, changed, not the changes to the built files. So, because Netlify doesn’t have them, it can just build them for itself.

We need to go into the Netlify settings for our project into Build & deploy > Continuous deployment > Build settings.

We update our “build command” to npm run build and the “publish directory” to dist.
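
If you’d rather keep those settings in the repository than in the dashboard, Netlify also reads a netlify.toml file from the project root; a minimal version would be:

# netlify.toml
[build]
  command = "npm run build"
  publish = "dist"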

Netlify is smart enough to do this itself when you add an Astro project from the get-go, but here we’re changing the site from totally static to Astro, so it’s our job to update it.

A new deployment from Netlify (which you can do from a new commit to GitHub or Deploys > Trigger Deploy in the Netlify dashboard) and we’re in business:

Adding a Real Domain Name

Right now I’ve got mycoolpersonalportfolio.netlify.app which is indeed a “real” domain name. But it’s just the free subdomain that Netlify gives you. It’s neat you can customize it, but it doesn’t have quite the professional feel that your own domain name would have. For example, my real website is at chriscoyier.net and that feels much better to me.

A domain like that is something you own. You have to buy it, and you can have it forever as long as you pay the renewal costs. In a sense, the domain name is almost more important than the website that’s on it since the content can and will change but the domain name won’t.

Netlify itself will help you buy a domain name. And honestly, it’s almost surely the easiest path forward here to do that, as they are incentivized to make it easy and work flawlessly. That’s fine, it’ll get the job done.

But personally, I like to keep the domains I own registered separately from the web host. Say you want to leave Netlify hosting one day: wouldn’t it be weird to manage the domain at Netlify while dealing with the hosting somewhere else? It feels off to me, like the incentives are no longer aligned.

I have most of my domains on GoDaddy, which is a big, popular choice, but I’ve heard good things about Porkbun, there is Cloudflare, and a million others.

I own coyier.dev and I’ve never done anything with it, so what I’ll do is set it up as the domain for this project.

Updating DNS

The trick is updating the DNS information for the domain name I own to what Netlify needs in order to host the site properly. In Netlify, I go to Domain Management and get to this area. At the same time, I’m logged into GoDaddy and find my way to the DNS nameservers area. I need to update the nameservers in GoDaddy to the ones Netlify tells me to use.

It’s likely to take a couple of hours for you to see this actually work. DNS is strange and mysterious though, requiring DNS servers around the world to learn this new information, so it’s possible it takes 24 hours or more.
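
If you want to peek at the progress yourself, you can ask DNS directly from the command line (dig ships with macOS and most Linux distributions):

# Which nameservers does the world currently see for this domain?
dig +short NS coyier.dev

Once that returns the nameservers Netlify gave you, you’re most of the way there.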

Once this process is done and the DNS is resolving, as they say, we’re done here!

We did it

We’ve done what we set out to do. We have a portfolio website for ourselves.

The code for it is on GitHub, which is a great place for it. Just think: if we drop our computer into a lake, when we get a new computer, we can just pull the code down again from GitHub and away we go. We could even invite a friend to help us with it, and our changes will merge together.

We’re using Astro to build the site, which does all sorts of useful things for us, like making the best static website it can make. We’re taking advantage of its build process to manage the work area of the site so it’ll be easy to add and change things.

We have Netlify hosting the site for us, which makes the website a real website that anyone in the world can visit. Netlify even builds our website for us when we commit new code to GitHub. This keeps our GitHub repo nice and clean.

We have a real domain name that we’ve “pointed” at Netlify. This gives us a nice level of professionalism and control.

If you’ve used this as a guide to do this work, I’d love to hear from you. You’re well on your way to becoming a web professional. If you’re anything like me, you get a nice sense of satisfaction from this whole process. 💜

The Simplest Way to Deploy Your Own Updatable Portfolio Site

Let’s say you’ve got a set of static files (HTML, CSS, JavaScript, images, etc.) that build your website. Perhaps it’s your portfolio website, thanks to having taken Jen Kramer’s course Getting Started with CSS where you literally build a portfolio, for example, but it could be anything.

A file directory labeled 'portfolio' showing three files: index.html, script.js, and style.css, with their sizes and types displayed.
Hey, what do you know, I’ve got some static files that make a nice personal portfolio page right here.

We’ve covered the very fastest way to get those files turned into a deployed website on the internet already, using Netlify’s tool. That totally works, but we can take things a little further to make things easier on our future selves.

Websites tend to need to be updated. Technically, you can keep using that tool to drag-and-drop your entire site again. But since we’re here to learn to be a better developer, let’s do better than that. We’re going to start using Git and GitHub. Let’s do the steps.


Git?

Git is the name of the technology we’ll use, which is what they call a VCS or Version Control System. We’ll put our files in Git, then when we change anything, Git will know about it. Git allows us to “commit” those changes, which gives us a history of what changed. That alone is a great feature. But crucially, Git allows us to work with others (they can “pull” those commits) and most importantly for us today: connect other apps to Git. For example, when changes are pushed up, “deploy” the changes to the live website.

1) Make sure you have a GitHub account

There is a 99.99% chance you’ll need/want a GitHub account in your developer career. If you don’t already have one, get one:

GitHub sign-up page showing fields for email, password, username, country/region, and email preferences, with a dark background and cartoonish character icons.
The accessibility of this page just got improved while retaining a great design, good job team.

2) Get the GitHub Desktop App

We’re baby-steppin’ here, and I think it will be easier for us to use the official GitHub app than it will be to use what developers call “the command line” to work with Git (but someday you can level up).

Screenshot of the GitHub Desktop application showcasing the interface with an emphasis on simplifying the Git workflow.
It’s free.

Honestly, I’ve been using Git for a hot long while and I still use GUI apps like this (literally this) to work with Git, as I prefer the visual nature of it. This is true of many of the talented developers I work with, who are very capable of command line usage as well.

3) Make a Repo

“Repo” is just short for “repository”. You could make one locally and push it up to GitHub as a second step, but for whatever reason I prefer making it on GitHub, “pulling it down” and going from there.

Screenshot of the GitHub interface for creating a new repository, featuring fields for repository name, description, and options for initializing and licensing.
You likely don’t need anything but the defaults here.

4) Pull the Repo from GitHub to your Local Computer

One reason to use the GitHub Desktop app we downloaded is that the GitHub website is nicely integrated with it, giving us a quick button to click:

Screenshot of a GitHub repository page showcasing the options for setting up the repository in GitHub Desktop.
Screenshot of the GitHub Desktop app showing the 'Clone a Repository' dialog, with fields for the repository URL and local path.
I have a folder just called “GitHub” I put all my repos in.

This folder will essentially be empty. (In truth, it has a .git folder inside of it, but most operating systems and code editors hide folders and files that start with a . by default so you don’t see it while browsing files.)

5) Put your static files in the folder you just pulled down

Now you can drag your static files into that “empty” folder that is your repo. When the files are in there, GitHub Desktop (and really, Git itself) will “see” those files as changes to the repo. So you’ll see this:

Screen displaying a GitHub Desktop application with a portfolio repository, showing files index.html, script.js, and style.css marked as changed, along with a code window for editing index.html.

(That .DS_Store file is just an awkward thing macOS does. Try right-clicking that file and ignoring it and seeing what that does.)

6) Push your static files to the Repo

All those files are now selected (see the checkmarks). Type in a commit message (where it says “Summary (required)” in GitHub Desktop) and then click the blue Commit button.

After committing, you will no longer see any local changes, but you’ll see that “commit” you just did under the History tab. Your job now is to click the Publish branch button in the upper right.

Screenshot of the GitHub Desktop application showing the first commit in a repository named 'my-portfolio', with a list of changed files including index.html, script.js, and style.css.

After you’ve done that, you’ll see the files you “pushed” up right on GitHub.com in your repo URL:

Screenshot of a GitHub repository showing a personal portfolio project with files including index.html, script.js, and style.css, along with commit history and repository details.

7) Now that your website files are on GitHub, we can deploy them to a live website

Just so you’re aware, GitHub has a product called GitHub Pages where we could just make GitHub itself the home of your website. Jen demonstrates this in her course.

We’re also fans of Netlify here and generally think that’s the best option for projects like this, so sign up for a free account at Netlify if you don’t have one.

8) Make a New Project on Netlify

Once you’re in, go to Projects and add a new one by selecting Import an existing project.

Then select GitHub as that’s where our project lives.

You may need to grant Netlify permissions on GitHub:

Once Netlify is authorized, you’ll see a list of your repos. Find the portfolio one we’re working with and click it.

Now you pick the URL for it, which for now will be your-chosen-name.netlify.app. You don’t need to change any other settings, so scroll down and Deploy it.

9) Your Website will Go Live

Netlify will work on deploying it, which should be pretty fast: a few minutes at worst.

Then it will be live!

You can click that green link like you see above to see the website.

You can share that URL with anyone in the world and they’ll be able to see it. That’s the power of the world wide web. It’s awesome. Here’s a view of the files I uploaded:

10) Make Some Changes

Another wonderful part of working on websites is you can easily change them at any time. That’s part of why we’re working with Git, because we can push up those changes and keep track of them. We can also efficiently deploy only changed files and such.

If I change the files locally, the GitHub Desktop app will show me what has changed. I can check out those changes, confirming it’s exactly as I want, then type in a commit message and commit them, then click Push origin to both push the changes to GitHub and deploy the site on Netlify.

Screenshot of a GitHub Desktop interface showing changes in HTML and CSS files, highlighting modifications to the text 'Frontend Developer' to 'Front-End Dev' with a commit message field.
Screenshot of a portfolio website featuring the text 'Hi, I'm Chris Front-End Dev' with a pink button labeled 'Contact' and navigation links.
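
For the curious, here’s roughly what GitHub Desktop is doing for you under the hood; a sketch of the command-line equivalent, assuming your default branch is named main:

git status                        # see which files changed
git add .                         # stage the changed files
git commit -m "Update job title"  # commit with a message
git push origin main              # push to GitHub (which triggers Netlify)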

You really are a web designer and front-end developer now!

Next time we’ll take things just a smidge further, adding in a tool to help us build slightly more complex websites, which will make it clearer why we’re using Netlify. And we’ll use a “real” domain name entirely of our own.



Netlify Free Plan

When we published our advice on the simplest and best way to take some static local files and make a proper online website out of them, we recommended Netlify. That holds true. But there is some trepidation, as once in a while you’d hear a horror story about usage blowing up unexpectedly and a user being stuck with a big bill. This problem isn’t isolated to Netlify, but it has happened there too. That’s a scary prospect for anyone, as unlikely as it may be. So I think it’s notable that Netlify announced a free plan which is free forever and has usage limits hard-baked into it, rather than the pay-to-scale model of other plans. That makes it safe: you’ll never get an unexpected bill.

Introducing Fly.io

Fly.io is an increasingly popular infrastructure platform. Fly is a place to deploy your applications, similar to Vercel or Netlify, but with some different tradeoffs.

This post will introduce the platform, show how to deploy web apps, stand up databases, and some other fun things. If you leave here wanting to learn more, the docs are here and are outstanding.

What is Fly?

Where platforms like Vercel and Netlify run your app on serverless functions that spin up and die off as needed (typically running on AWS Lambda), Fly runs your machines on actual VMs in their infrastructure. These VMs can be configured to scale up as your app’s traffic grows, just like with serverless functions. But since they run continuously, there are no cold start issues. That said, if you’re on a budget, or your app isn’t that important (or both), you can also configure Fly to scale your app down to zero machines when traffic dies. You’ll be billed essentially nothing during those periods of inactivity, though your users will see a cold start if they’re the first to hit your app during an inactive period.

To be perfectly frank, the cold start problem has been historically exaggerated, so please don’t pick a platform just to avoid cold starts.

Why VMs?

You might be wondering why, if cold starts aren’t a big deal in practice, one should care about Fly using VMs instead of cloud functions. For me there are two reasons: the ability to execute long-running processes, and the ability to run anything that will run in a Docker image. Let’s dive into both.

The ability to handle long-running processes greatly expands the range of apps Fly can run. They have turn-key solutions for Phoenix LiveView, Laravel, Django, Postgres, and lots more. Anything you ship on Fly will be via a Dockerfile (don’t worry, they’ll help you generate them). That means anything you can put into a Dockerfile, can be run by Fly. If there’s a niche database you’ve been wanting to try (Neo4J, CouchDB, etc), just stand one up via a Dockerfile (and both of those DBs have official images), and you’re good to go. New databases, new languages, new anything: if there’s something you’ve been wanting to try, you can run it on Fly if you can containerize it; and anything can be containerized.
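
As a hypothetical example of how little that can take: the entire Dockerfile for standing up CouchDB from its official image could be as small as this (real use would also want credentials and a volume configured):

# Dockerfile: build on the official CouchDB image
FROM couchdb:3

# CouchDB listens on 5984; Fly will route traffic to this port
EXPOSE 5984

Run fly launch from that directory and Fly builds and runs it like any other app.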

But… I don’t know Docker

Don’t worry, Fly will, as you’re about to see, help you scaffold a Dockerfile from any common app framework. We’ll take a quick look at what’s generated, and explain the high points.

That said, Docker is one of the most valuable tools for a new engineer to get familiar with, so if Fly motivates you to learn more, so much the better!

If you’d like to go deeper on Docker, our course Complete Intro to Containers from Brian Holt is fantastic.

Let’s launch an app!

Let’s ship something. We’ll create a brand new Next.js app, using the standard scaffolding here.

We’ll create an app, run npm i and then npm run dev and verify that it works.
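
Concretely, that’s something like the following (I’m using next-fly-test as the app name, which you’ll see again in the config Fly generates later):

# Scaffold a Next.js app with the standard tool, then run it locally
npx create-next-app@latest next-fly-test
cd next-fly-test
npm i
npm run dev   # verify it works at http://localhost:3000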

screenshot of a running Next.js app

Now let’s deploy it to Fly. If you haven’t already, install the Fly CLI, and sign up for an account. Instructions can be found in the first few steps of the quick start guide.

To deploy an app on Fly, you need to containerize your app. We could manually piece together a valid Dockerfile that would run our Next app, and then run fly deploy. But that’s a tedious process. Thankfully Fly has made life easier for us. Instead, we can just run fly launch from our app’s root directory.

Fly easily detected Next.js, and then made some best guesses as to deployment settings. It opted for the third cheapest deployment option. Here’s Fly’s full pricing information. Fly lets us accept these defaults, or tweak them. Let’s hit yes to tweak. We should be taken to the fly.io site, where our app is in the process of being set up.

For fun, let’s switch to the cheapest option, and change the region to Virginia (what AWS would call us-east-1).

Hit confirm, and return to your command line. It should finish setting everything up, which should look like this, in part.

If we head over to our Fly dashboard, we should see something like this:

We can then click that app and see the app’s details

And lastly, we can go to the URL listed, and see the app actually running!

Looking closer

There are a number of files that Fly created for us. The two most important are the Dockerfile and fly.toml. Let’s take a look at each, starting with the Dockerfile.

# syntax = docker/dockerfile:1

# Adjust NODE_VERSION as desired
ARG NODE_VERSION=20.18.1
FROM node:${NODE_VERSION}-slim as base

LABEL fly_launch_runtime="Next.js"

# Next.js app lives here
WORKDIR /app

# Set production environment
ENV NODE_ENV="production"

# Throw-away build stage to reduce size of final image
FROM base as build

# Install packages needed to build node modules
RUN apt-get update -qq && \
    apt-get install --no-install-recommends -y build-essential node-gyp pkg-config python-is-python3

# Install node modules
COPY package-lock.json package.json ./
RUN npm ci --include=dev

# Copy application code
COPY . .

# Build application
RUN npm run build

# Remove development dependencies
RUN npm prune --omit=dev


# Final stage for app image
FROM base

# Copy built application
COPY --from=build /app /app

# Start the server by default, this can be overwritten at runtime
EXPOSE 3000
CMD [ "npm", "run", "start" ]

A Quick Detour to Understand Docker

Docker is a book unto itself, but as an extremely quick intro: Docker allows us to package our app into an “image.” You start with an entire operating system (almost always a minimal Linux distro) and do whatever you want with it. Docker then packages whatever you create, and allows it to be run. The Docker image is completely self-contained: you choose everything that goes into it, from the base operating system down to whatever you install into the image.
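
To make that concrete, working with any Dockerfile locally boils down to two commands: build the image, then run a container from it. A sketch, using our Next.js app:

# Package the current directory (per its Dockerfile) into an image
docker build -t next-fly-test .

# Start a container from that image, exposing port 3000 on localhost
docker run -p 3000:3000 next-fly-test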

Now let’s take a quick tour of the important pieces of our Dockerfile.

After some comments and labels, we find what will always be present at the top of a Dockerfile: the FROM command.

FROM node:${NODE_VERSION}-slim as base

This tells us the base of the image. We could start with any random Linux distro and then install Node and npm, but unsurprisingly there’s already an officially maintained Node image: there will almost always be officially maintained Docker images for almost any technology. In fact, there are many different Node images to choose from, many with different underlying base Linux distros.

There’s a LABEL that’s added, likely for use with Fly. Then we set the working directory in our image.

WORKDIR /app

We copy the package.json and lockfiles.

# Install node modules
COPY package-lock.json package.json ./

Then install dependencies with npm ci (a stricter, lockfile-driven version of npm install), but in our Docker image:

RUN npm ci --include=dev

Then we copy the rest of the application code:

# Copy application code
COPY . .

Hopefully you get the point. We won’t go over every line here. But hopefully the general idea is clear enough, and you’d feel comfortable tweaking this if you wanted to. Two last points, though. See this part:

# Install packages needed to build node modules
RUN apt-get update -qq && \
    apt-get install --no-install-recommends -y build-essential node-gyp pkg-config python-is-python3

That tells the Linux package manager to install some things Fly thinks Next might need, but in actuality probably doesn’t. Don’t be surprised if these lines are absent when you read this, and try for yourself.

Lastly, if you were wondering why the package.json and lockfiles were copied, followed by npm install, and only then the rest of the application code, the reason is (Docker) performance. Briefly, each line in the Dockerfile creates a “layer.” These layers can be cached and re-used if nothing has changed. If anything has changed, that invalidates the cache for that layer, and also all layers after it. So you’ll want to push your likely-to-change work as low as possible. Your application code will almost always change between deployments; the dependencies in your package.json will change much less frequently. So we do that install first, by itself, so it will be more likely to be cached, and speed up our builds.

I tried my best to provide the absolute minimal amount of a Docker intro to make this post make sense, without being overwhelming. I hope I’ve succeeded. If you’d like to learn more, there are tons of books and YouTube videos, and even an entire course here on Frontend Masters.

Fly.toml

Now let’s take a peek at the fly.toml file.

# fly.toml app configuration file generated for next-fly-test on 2024-11-28T19:04:19-06:00
#
# See https://fly.io/docs/reference/configuration/ for information about how to use this file.
#

app = 'next-fly-test'
primary_region = 'iad'

[build]

[http_service]
  internal_port = 3000
  force_https = true
  auto_stop_machines = 'stop'
  auto_start_machines = true
  min_machines_running = 0
  processes = ['app']

[[vm]]
  size = 'shared-cpu-1x'

This is basically the config file for the Fly app. The options for this file are almost endless, and are documented here. The three most important lines are the following.

auto_stop_machines = 'stop'

This tells Fly to automatically kill machines when they’re not needed, when traffic is low on our app.

auto_start_machines = true

The line above tells Fly to automatically spin up new machines when it detects it needs to do so, given your traffic. Lastly, this line

min_machines_running = 0

That line allows us to tell Fly to always keep a minimum number of machines running, no matter how minimal your current traffic is. Setting it to zero allows for no machines to be running, which means your next visitor will see a slow response as the first machine spins up.

You may have noticed above that Fly spun up two machines initially, even though there was no traffic at all. It does this by default to give your app higher availability: if anything happens to one machine, the other will (hopefully) still be up and running. If you don’t want or need this, you can prevent it by passing --ha=false when you run fly launch or fly deploy (or you can just kill one of the machines in the dashboard; Fly will not re-create it on subsequent deploys).

Machines won’t bill you if they’re not running

When a machine is not running, you’ll be billed essentially zero for it. You’ll just pay $0.15 per GB, per month, per machine (machines will usually have only one GB).

Adding a database

You can launch a Fly app anytime with just a Dockerfile. You could absolutely find an official Postgres Docker image and deploy from that. But it turns out Fly has this built in. Let’s run fly postgres create in a terminal and see what happens.

It’ll ask you for a name and a region, and then how serious of a Postgres setup you want. Once it’s done, it’ll show you something like this.

Fly postgres create

The connection string listed at the bottom can be used to connect to your db from within another Fly app (which you own). But to run database creation and migration scripts, and for local development, you’ll need to connect to this db on your local machine. To do that, you can run this:

fly proxy 5432 -a <your app name>

Now you can connect via the same connection string on your local machine, but on localhost:5432 instead of flycast:5432.
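
For example, with the proxy running in one terminal, you can run migrations or poke around with psql from another, using the credentials fly postgres create printed (the app name and password here are placeholders):

# Terminal 1: forward local port 5432 to the Fly Postgres app
fly proxy 5432 -a my-postgres-app

# Terminal 2: connect with psql through the proxy
psql "postgres://postgres:<password>@localhost:5432"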

Making your database publicly available

It’s not ideal, but if you want to make your Fly pg box publicly available, you can. You basically have to add a dedicated ipv4 address to it (at a cost of $2 per month), and then tweak your config.

Consider using a dedicated host for serious applications.

Fly’s built-in Postgres support is superb, but there are some things you’ll have to manage yourself. If that’s not for you, Supabase is a fully managed pg host, and it’s also superb. Fly even has a service for creating Supabase DBs on Fly infra, for extra low latency. It’s currently only in public alpha, but it might be worth keeping an eye on.

Interlude

If you just want a nice place to deploy your apps, what we’ve covered will suffice for the vast majority of use cases. I could stop this post here, but I’d be remiss if I didn’t show some of the cooler things you can do with Fly. Please don’t let what follows be indicative of the complexity you’ll normally deal with. We’ll be putting together a cron job for running Postgres backups. In practice, you’ll just use a mature DB provider like Supabase or PlanetScale, which will handle things like this for you.

But sometimes it’s fun to tinker, especially for side projects. So let’s kick the tires a bit and see what we can come up with.

Having Fun

One of Fly’s greatest strengths is its flexibility. You give it a Dockerfile, and it’ll run it. To drive that point home, let’s conclude this post with a fun example.

As much as I love Fly, it makes me a little uneasy that my database is running isolated in some VM under my account. Accidents happen, and I’d want automatic backups. Why don’t we build a Docker image to do just that?

I’ll want to run a script, written in TypeScript, preferably without hating my life: Bun is ideal for this. I’ll also need to run the actual pg_dump command. So what should I build my Dockerfile from: the Bun image, which would lack the pg utilities, or the Postgres base, which wouldn’t have Bun installed? I could do either, and use the Linux package manager to install what I need. But really, there’s a simpler way: use a multi-stage Docker build. Let’s see the whole Dockerfile:

FROM oven/bun:latest AS BUILDER

WORKDIR /app

COPY . .

RUN ["bun", "install"]
RUN ["bun", "build", "index.ts", "--compile", "--outfile", "run-pg_dump"]

FROM postgres:16.4

WORKDIR /app
COPY --from=BUILDER /app/run-pg_dump .
COPY --from=BUILDER /app/run-backup.sh .

RUN chmod +x ./run-backup.sh

CMD ["./run-backup.sh"]

We start with a Bun image. We run a bun install to tell Bun to install what we need: AWS SDKs and such. Then we tell Bun to compile our script into a standalone executable: yes, Bun can do that, and yes, it’s that easy.

FROM postgres:16.4

Tells Docker to start a new stage, from a new (Postgres) base.

WORKDIR /app
COPY --from=BUILDER /app/run-pg_dump .
COPY --from=BUILDER /app/run-backup.sh .

RUN chmod +x ./run-backup.sh

CMD ["./run-backup.sh"]

This drops into the /app folder from the prior step, and copies over the run-pg_dump file, which Bun compiled for us, and also copies over run-backup.sh. This is a shell script I wrote. It runs pg_dump a few times, to generate the files the Bun script (run-pg_dump) is expecting, and then calls it. Here’s what that file looks like:

#!/bin/sh

PG_URI_CLEANED=$(echo ${PG_URI} | sed -e 's/^"//' -e 's/"$//')

pg_dump ${PG_URI_CLEANED} -Fc > ./backup.dump

pg_dump ${PG_URI_CLEANED} -f ./backup.sql

./run-pg_dump

This unhinged line:

PG_URI_CLEANED=$(echo ${PG_URI} | sed -e 's/^"//' -e 's/"$//')

is something ChatGPT helped me write, to strip the double quotes from my connection string environment variable.

Lastly, if you’re curious about the index.ts file Bun compiled into a standalone executable, this is it:

import fs from "fs";
import path from "path";

import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";

const numToDisplay = (num: number) => num.toString().padStart(2, "0");

const today = new Date();
const date = `${today.getFullYear()}/${numToDisplay(today.getMonth() + 1)}/${numToDisplay(today.getDate())}`;
const time = `${today.getHours()}-${numToDisplay(today.getMinutes())}-${numToDisplay(today.getSeconds())}`;
const filename = `${date}/${time}`;

const REGION = "us-east-1";
const dumpParams = {
  Bucket: "my-library-backups",
  Key: `${filename}.dump`,
  Body: fs.readFileSync(path.resolve(__dirname, "backup.dump")),
};
const sqlParams = {
  Bucket: "my-library-backups",
  Key: `${filename}.sql`,
  Body: fs.readFileSync(path.resolve(__dirname, "backup.sql")),
};

const s3 = new S3Client({
  region: REGION,
  credentials: {
    accessKeyId: process.env.AWS_ID!,
    secretAccessKey: process.env.AWS_SECRET!,
  },
});

s3.send(new PutObjectCommand(sqlParams))
  .then(() => {
    console.log("SQL Backup Uploaded!");
  })
  .catch(err => {
    console.log("Error: ", err);
  });

s3.send(new PutObjectCommand(dumpParams))
  .then(() => {
    console.log("Dump Backup Uploaded!");
  })
  .catch(err => {
    console.log("Error: ", err);
  });

I’m sure someone who’s actually good with Docker could come up with something better, but this works well enough.

To see this whole thing all together, in one place, you can see it in my GitHub.

Scheduling a custom job

We have a working, valid Docker image. How do we tell Fly to run it on an interval? Fly has a command just for that: fly machine run. In fact, it can take a schedule argument, to have Fly run it on an interval. Unfortunately, the options are horribly limited: only hourly, daily, and monthly. But, as a workaround you can run this command at different times: this will set up executions at whatever interval you selected, scheduled off of when you ran the command.

fly machine run . --schedule=daily

If you ran that command at noon, that will schedule a daily task that runs at noon every day. If you run that command again at 5pm, it will schedule a second task to run daily, at 5pm (without interfering with the first). Each job will have a dedicated machine, but will be idle when not running, which means it will cost you almost nothing; you’ll pay the normal $0.15 per month, per GB on the machine.

I hate this limitation in scheduling machines. In theory there’s a true cron job template here, but it’s not the simplest thing to look through.

Odds and ends

That was a lot. Let’s lighten things up a bit with some happy odds and ends, before we wrap up.

Custom domains

Fly makes it easy to add a custom domain to your app. You’ll just need to add the right records. Full instructions are here.
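
The CLI side of that is short; a sketch with a placeholder domain (the DNS records themselves get added at your registrar per Fly’s instructions):

# Ask Fly to issue a TLS certificate for your custom domain
fly certs add www.example.com

# Check verification and issuance status
fly certs show www.example.com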

Secrets

You’ll probably have some secrets you want available in your app in production. If you’re thinking you could just bundle a .env.prod file into your Docker image: yes, you could. But that’s considered a bad idea. Instead, leverage Fly’s secret management.
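
As a sketch, using the environment variable names our backup script expects (the values are placeholders):

# Set secrets; Fly injects them as environment variables at runtime
fly secrets set AWS_ID=xxxx AWS_SECRET=xxxx PG_URI="postgres://..."

# List which secrets are set (values stay hidden)
fly secrets list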

Learning More

This post started brushing up against some full-stack topics. If this sparked your interest, be sure to check out the entire course on full-stack engineering here on Frontend Masters.

Wrapping Up

The truth is we’ve barely scratched the surface of Fly. For simple side projects, what we’ve covered here is probably more than you’d need. But Fly also has power tools available for advanced use cases. The sky’s the limit!

Fly.io is a wonderful platform. It’s fun to work with, will scale to your application’s changing load, and is incredibly flexible. I urge you to give it a try for your next project.
