
    Context And Variables In The Hugo Static Site Generator — Smashing Magazine

    02/18/2021

    About The Author

    Kristian does web development and writing as part of the team behind the Tower Git client. Based on an island in Southwest Finland, he enjoys running, reading …

    In this article, we take a look at the topic of context and variables in Hugo, a popular static site generator. You’ll understand concepts such as the global context, flow control, and variables in Hugo templates, as well as data flow from content files through templates to partials and base templates.

    In this article, we’ll take a close look at how context works in the Hugo static site generator. We’ll examine how data flows from content to templates, how certain constructs change what data is available, and how we can pass on this data to partials and base templates.

    This article is not an introduction to Hugo. You’ll probably get the most out of it if you have some experience with Hugo, as we won’t go over every concept from scratch, but rather focus on the main topic of context and variables. However, if you refer to the Hugo documentation throughout, you may well be able to follow along even without previous experience!

    We’ll study various concepts by building up an example page. Not every single file required for the example site will be covered in detail, but the complete project is available on GitHub. If you want to understand how the pieces fit together, that’s a good starting point. Please also note that we won’t cover how to set up a Hugo site or run the development server — instructions for running the example are in the repository.

    What Is A Static Site Generator?

    If the concept of static site generators is new to you, here’s a quick introduction! Static site generators are perhaps best described by comparing them to dynamic sites. A dynamic site like a CMS generally assembles a page from scratch for each visit, perhaps fetching data from a database and combining various templates to do so. In practice, the use of caching means the page is not regenerated quite so often, but for the purpose of this comparison, we can think of it that way. A dynamic site is well suited to dynamic content: content that changes often, content that’s presented in a lot of different configurations depending on input, and content that can be manipulated by the site visitor.

    In contrast, many sites rarely change and accept little input from visitors. A “help” section for an application, a list of articles or an eBook could be examples of such sites. In this case, it makes more sense to assemble the final pages once when the content changes, thereafter serving the same pages to every visitor until the content changes again.

    Dynamic sites have more flexibility, but place more demand on the server they’re running on. They can also be difficult to distribute geographically, especially if databases are involved. Static site generators can be hosted on any server capable of delivering static files, and are easy to distribute.

    A common solution today, which mixes these approaches, is the JAMstack. “JAM” stands for JavaScript, APIs and markup and describes the building blocks of a JAMstack application: a static site generator generates static files for delivery to the client, but the stack has a dynamic component in the form of JavaScript running on the client — this client component can then use APIs to provide dynamic functionality to the user.

    Hugo

    Hugo is a popular static site generator. It’s written in Go, and the fact that Go is a compiled programming language hints at some of Hugo’s benefits and drawbacks. For one, Hugo is very fast, meaning that it generates static sites very quickly. Of course, this has no bearing on how fast or slow the sites created using Hugo are for the end user, but for the developer, the fact that Hugo compiles even large sites in the blink of an eye is quite valuable.

    However, as Hugo is written in a compiled language, extending it is difficult. Some other site generators allow you to insert your own code — in languages like Ruby, Python or JavaScript — into the compilation process. To extend Hugo, you would need to add your code to Hugo itself and recompile it — otherwise, you’re stuck with the template functions Hugo comes with out-of-the-box.

    While Hugo does provide a rich variety of template functions, this limitation can start to chafe if generating your pages involves complicated logic. As we found when migrating a site originally developed on a dynamic platform, the cases where you’ve taken the ability to drop in custom code for granted tend to pile up.

    Our team maintains a variety of web sites relating to our main product, the Tower Git client, and we’ve recently looked at moving some of these over to a static site generator. One of our sites, the “Learn” site, looked like a particularly nice fit for a pilot project. This site contains a variety of free learning material including videos, eBooks and FAQs on Git, but also other tech topics.

    Its content is largely of a static nature, and whatever interactive features there are (like newsletter sign-ups) were already powered by JavaScript. At the end of 2020, we converted this site from our previous CMS to Hugo, and today it runs as a static site. Naturally, we learned a lot about Hugo during this process. This article is a way of sharing some of the things we learned.

    Our Example

    As this article grew out of our work on converting our pages to Hugo, it seems natural to put together a (very!) simplified hypothetical landing page as an example. Our main focus will be a reusable so-called “list” template.

    In short, Hugo will use a list template for any page that contains subpages. There’s more to Hugo’s template hierarchy than that, but you don’t have to implement every possible template. A single list template goes a long way. It will be used in any situation calling for a list template where no more specialized template is available.

    Potential use cases include a home page, a blog index or a list of FAQs. Our reusable list template will reside in layouts/_default/list.html in our project. Again, the rest of the files needed to compile our example are available on GitHub, where you can also get a better look at how the pieces fit together. The GitHub repository also comes with a single.html template — as the name suggests, this template is used for pages that do not have subpages, but act as single pieces of content in their own right.

    Now that we’ve set the stage and explained what it is we’ll be doing, let’s get started!

    The Context Or “The Dot”

    It all starts with the dot. In a Hugo template, the object . — “the dot” — refers to the current context. What does this mean? Every template rendered in Hugo has access to a set of data — its context. This is initially set to an object representing the page currently being rendered, including its content and some metadata. The context also includes site-wide variables like configuration options and information about the current environment. You’d access a field like the title of the current page using .Title and the version of Hugo being used through .Hugo.Version — in other words, you’re accessing fields of the . structure.

    Importantly, this context can change, making a reference like .Title above point at something else or even making it invalid. This happens, for example, as you loop over a collection of some kind using range, or as you split templates into partials and base templates. We’ll look at this in detail later!

    Hugo uses the Go “templates” package, so when we refer to Hugo templates in this article, we’re really talking about Go templates. Hugo does add a lot of template functions not available in standard Go templates.
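    Because of this, the core mechanics described here can be tried out with nothing but Go’s standard text/template package. Here’s a minimal sketch, in which the Page struct is a hypothetical stand-in for Hugo’s much richer page object:

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// Page is a hypothetical stand-in for the object Hugo binds to "the dot".
type Page struct {
	Title string
}

// render compiles src and executes it against ctx, returning the output.
func render(src string, ctx interface{}) string {
	var buf bytes.Buffer
	if err := template.Must(template.New("t").Parse(src)).Execute(&buf, ctx); err != nil {
		panic(err)
	}
	return buf.String()
}

func main() {
	// The value passed to Execute becomes the initial context, so
	// {{ .Title }} reads the Title field of that value.
	fmt.Println(render("<h1>{{ .Title }}</h1>", Page{Title: "Home"}))
}
```

    Hugo does essentially the same thing for each page, binding the dot to that page’s data before executing the template.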

    In my opinion, the context and the possibility to rebind it is one of Hugo’s best features. To me, it makes a lot of sense to always have “the dot” represent whatever object is the main focus of my template at a certain point, rebinding it as necessary as I go along. Of course, it’s possible to get yourself into a tangled mess as well, but I’ve been happy with it so far, to the extent that I quickly started missing it in any other static site generator I looked at.

    With this, we’re ready to look at the humble starting point of our example — the template below, residing in the location layouts/_default/list.html in our project:

    <html>
      <head>
        <title>{{ .Title }} | {{ .Site.Title }}</title>
        <link rel="stylesheet" href="/css/style.css">
      </head>
      <body>
        <nav>
          <a class="logo" href="{{ "/" | relURL }}">
            <img src="/img/tower-logo.svg">
            <img src="/img/tower-claim.svg">
          </a>
          <ul>
            <li><a href="/">Home</a></li>
          </ul>
        </nav>
        <section class="content">
          <div class="container">
            <h1>{{ .Title }}</h1>
            {{ .Content }}
          </div>
        </section>
      </body>
    </html>
    

    Most of the template consists of a bare-bones HTML structure, with a stylesheet link, a menu for navigation and some extra elements and classes used for styling. The interesting stuff is between the curly braces, which signal Hugo to step in and do its magic, replacing whatever is between the braces with the result of evaluating some expression and potentially manipulating the context as well.

    As you may be able to guess, {{ .Title }} in the title tag refers to the title of the current page, while {{ .Site.Title }} refers to the title for the whole site, set in the Hugo configuration. A tag like {{ .Title }} simply tells Hugo to replace that tag with the contents of the field Title in the current context.

    So, we’ve accessed some data belonging to the page in a template. Where does this data come from? That’s the topic of the following section.

    Content And Front Matter

    Some of the variables available in the context are automatically provided by Hugo. Others are defined by us, mainly in content files. There are also other sources of data like configuration files, environment variables, data files and even APIs. In this article our focus will be on content files as the source of data.

    In general, a single content file represents a single page. A typical content file includes the main content of that page but also metadata about the page, like its title or the date it was created. Hugo supports several formats both for the main content and the metadata. In this article we’ll go with perhaps the most common combination: the content is provided as Markdown in a file containing the metadata as YAML front matter.

    In practice, that means the content file starts with a front matter section delimited by lines consisting of three dashes (---). Inside this section, metadata is defined using a key: value syntax (as we’ll see soon, YAML supports more elaborate data structures too). The front matter is followed by the actual content, written in the Markdown markup language.

    Let’s make things more concrete by looking at an example. Here’s a very simple content file with one front matter field and one paragraph of content:

    ---
    title: Home
    ---
    
    Home page of the Tower Git client. Over 100,000 developers and designers use Tower to be more productive!
    

    (This file resides at content/_index.md in our project, with _index.md denoting the content file for a page that has subpages. Again, the GitHub repository makes it clear where which file is supposed to go.)

    Rendered using the template from earlier, along with some styles and peripheral files (all found on GitHub), the result looks like this:

    [Screenshot: the rendered home page]

    You may wonder whether the field names in the front matter of our content file are predetermined, or whether we can add any field we like. The answer is “both”. There is a list of predefined fields, but we can also add any other field we can come up with. However, these fields are accessed a bit differently in the template. While a predefined field like title is accessed simply as .Title, a custom field like author is accessed using .Params.author.

    (For a quick reference on the predefined fields, along with things like functions, function parameters and page variables, see our own Hugo cheat sheet!)

    The .Content variable, used to access the main content from the content file in your template, is special. Hugo has a “shortcode” feature allowing you to use some extra tags in your Markdown content. You can also define your own. Unfortunately, these shortcodes will only work through the .Content variable — while you can run any other piece of data through a Markdown filter, this will not handle the shortcodes in the content.

    A note here about undefined variables: accessing a predefined field like .Date always works, even if you haven’t set it — an empty value is returned in this case. Accessing an undefined custom field, like .Params.thisHasNotBeenSet, also works, returning an empty value. However, accessing a non-predefined top-level field like .thisDoesNotExist will prevent the site from compiling.

    As indicated by .Params.author as well as .Hugo.Version and .Site.Title earlier, chained invocations can be used to access a field nested in some other data structure. We can define such structures in our front matter. Let’s look at an example, where we define a map, or dictionary, specifying some properties for a banner on the page in our content file. Here is the updated content/_index.md:

    ---
    title: Home
    banner:
      headline: Try Tower For Free!
      subline: Download our trial to try Tower for 30 days
    ---
    
    Home page of the Tower Git client. Over 100,000 developers and designers use Tower to be more productive!
    

    Now, let’s add a banner to our template, referring to the banner data using .Params the way described above:

    <html>
      ...
      <body>
        ...
        <aside>
          <h2>{{ .Params.banner.headline }}</h2>
      <p>{{ .Params.banner.subline }}</p>
        </aside>
      </body>
    </html>
    

    Here’s what our site looks like now:

    [Screenshot: the home page with the banner added]

    All right! At the moment, we’re accessing fields of the default context without any issues. However, as mentioned earlier, this context is not fixed, but can change. Let’s look at how that might happen.

    Flow Control

    Flow control statements are an important part of a templating language, allowing you to do different things depending on the value of variables, loop through data and more. Hugo templates provide the expected set of constructs, including if/else for conditional logic, and range for looping. Here, we will not cover flow control in Hugo in general (for more on this, see the documentation), but focus on how these statements affect the context. In this case, the most interesting statements are with and range.

    Let’s start with with. This statement checks whether some expression has a “non-empty” value and, if it does, rebinds the context to refer to the value of that expression. An end tag indicates the point where the influence of the with statement stops, and the context is rebound to whatever it was before. The Hugo documentation defines an empty value as false, 0, or any zero-length array, slice, map or string.
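    Since with comes straight from Go’s template language, its rebinding behavior can be sketched outside Hugo entirely. In this hypothetical example, the Banner map stands in for a front matter structure:

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// Page is a hypothetical stand-in for a page with optional banner data.
type Page struct {
	Banner map[string]string
}

// render compiles src and executes it against ctx, returning the output.
func render(src string, ctx interface{}) string {
	var buf bytes.Buffer
	if err := template.Must(template.New("t").Parse(src)).Execute(&buf, ctx); err != nil {
		panic(err)
	}
	return buf.String()
}

func main() {
	// Inside with, the dot is rebound to .Banner; for an empty value
	// (here, a nil map), the whole block is skipped.
	src := `{{ with .Banner }}<h2>{{ .headline }}</h2>{{ end }}`
	fmt.Println(render(src, Page{Banner: map[string]string{"headline": "Try Tower For Free!"}}))
	fmt.Println(render(src, Page{})) // no banner: nothing is printed
}
```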

    Currently, our list template is not doing much listing at all. It might make sense for a list template to actually feature some of its subpages in some way. This gives us a perfect opportunity for examples of our flow control statements.

    Perhaps we want to display some featured content at the top of our page. This could be any piece of content — a blog post, a help article or a recipe, for example. Right now, let’s say our Tower example site has some pages highlighting its features, use-cases, a help page, a blog page, and a “learning platform” page. These are all located in the content/ directory. We configure which piece of content to feature by adding a field in the content file for our home page, content/_index.md. The page is referred to by its path, assuming the content directory as root, like so:

    ---
    title: Home
    banner:
      headline: Try Tower For Free!
      subline: Download our trial to try Tower for 30 days without limitations
    featured: /features.md
    ...
    ---
    ...
    

    Next, our list template has to be modified to display this piece of content. Hugo has a template function, .GetPage, which will allow us to refer to page objects other than the one we’re currently rendering. Recall how the context, ., was initially bound to an object representing the page being rendered? Using .GetPage and with, we can temporarily rebind the context to another page, referring to the fields of that page when displaying our featured content:

    <nav>
      ...
    </nav>
    <section class="featured">
      <div class="container">
        {{ with .GetPage .Params.featured }}
          <article>
            <h2>{{ .Title }}</h2>
            {{ .Summary }}
            <p><a href="{{ .Permalink }}">Read more →</a></p>
          </article>
        {{ end }}
      </div>
    </section>
    

    Here, {{ .Title }}, {{ .Summary }} and {{ .Permalink }} between the with and the end tags refer to those fields in the featured page, and not the main one being rendered.

    In addition to having a featured piece of content, let’s list a few more pieces of content further down. Just like the featured content, the listed pieces of content will be defined in content/_index.md, the content file for our home page. We’ll add a list of content paths to our front matter like this (in this case also specifying the section headline):

    ---
    ...
    listing_headline: Featured Pages
    listing:
      - /help.md
      - /use-cases.md
      - /blog/_index.md
      - /learn.md
    ---
    

    The reason that the blog page has its own directory and an _index.md file is that the blog will have subpages of its own — blog posts.

    To display this list in our template, we’ll use range. Unsurprisingly, this statement will loop over a list, but it will also rebind the context to each element of the list in turn. This is very convenient for our content list.
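    The rebinding behavior of range can likewise be demonstrated with plain Go templates. In this sketch, the Listing field plays the role of our front matter list of paths:

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// Page is a hypothetical stand-in for a page with a list of content paths.
type Page struct {
	Listing []string
}

// render compiles src and executes it against ctx, returning the output.
func render(src string, ctx interface{}) string {
	var buf bytes.Buffer
	if err := template.Must(template.New("t").Parse(src)).Execute(&buf, ctx); err != nil {
		panic(err)
	}
	return buf.String()
}

func main() {
	// For each iteration, range rebinds the dot to the current element,
	// so {{ . }} inside the loop is one path string at a time.
	src := `{{ range .Listing }}<li>{{ . }}</li>{{ end }}`
	fmt.Println(render(src, Page{Listing: []string{"/help.md", "/use-cases.md"}}))
}
```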

    Note that, from the perspective of Hugo, “listing” only contains some strings. For each iteration of the “range” loop, the context will be bound to one of those strings. To get access to the actual page object, we supply its path string (now the value of .) as an argument to .GetPage. Then, we’ll use the with statement again to rebind the context to the listed page object rather than its path string. Now, it’s easy to display the content of each listed page in turn:

    <aside>
      ...
    </aside>
    <section class="listing">
      <div class="container">
        <h1>{{ .Params.listing_headline }}</h1>
        <div>
          {{ range .Params.listing }}
            {{ with $.GetPage . }}
              <article>
                <h2>{{ .Title }}</h2>
                {{ .Summary }}
                <p><a href="{{ .Permalink }}">Read more →</a></p>
              </article>
            {{ end }}
          {{ end }}
        </div>
      </div>
    </section>
    

    Here’s what the site looks like at this point:

    [Screenshot: the home page with featured and listed content]

    But hold on, there’s something weird in the template above — rather than calling .GetPage, we’re calling $.GetPage. Can you guess why .GetPage wouldn’t work?

    The notation .GetPage indicates that the GetPage function is a method of the current context. Indeed, in the default context, there is such a method, but we’ve just gone ahead and changed the context! When we call .GetPage, the context is bound to a string, which does not have that method. The way we work around this is the topic of the next section.

    The Global Context

    As seen above, there are situations where the context has been changed, but we’d still like to access the original context. Here, it’s because we want to call a method existing in the original context — another common situation is when we want to access some property of the main page being rendered. No problem, there’s an easy way to do this.

    In a Hugo template, $, known as the global context, refers to the original value of the context — the context as it was when template processing started. In the previous section, it was used to call the .GetPage method even though we had rebound the context to a string. Now, we’ll also use it to access a field of the page being rendered.
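    As $ also comes from Go’s template language, its behavior can be sketched in a plain Go program; here, Title and Listing are hypothetical stand-ins for our page data:

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// Page is a hypothetical stand-in for the page being rendered.
type Page struct {
	Title   string
	Listing []string
}

// render compiles src and executes it against ctx, returning the output.
func render(src string, ctx interface{}) string {
	var buf bytes.Buffer
	if err := template.Must(template.New("t").Parse(src)).Execute(&buf, ctx); err != nil {
		panic(err)
	}
	return buf.String()
}

func main() {
	// Inside range, the dot is each list element, but $ still refers to
	// the original context passed to Execute.
	src := `{{ range .Listing }}{{ $.Title }}: {{ . }} {{ end }}`
	fmt.Println(render(src, Page{Title: "Home", Listing: []string{"/help.md", "/blog/_index.md"}}))
}
```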

    At the beginning of this article, I mentioned that our list template is reusable. So far, we’ve only used it for the home page, rendering a content file located at content/_index.md. In the example repository, there is another content file which will be rendered using this template: content/blog/_index.md. This is an index page for the blog, and just like the home page it shows a featured piece of content and lists a few more — blog posts, in this case.

    Now, let’s say we want to show listed content slightly differently on the home page — not enough to warrant a separate template, but something we can do with a conditional statement in the template itself. As an example, we’ll display the listed content in a two-column grid, as opposed to a single-column list, if we detect that we’re rendering the home page.

    Hugo comes with a page method, .IsHome, which provides exactly the functionality we need. We’ll handle the actual change in presentation by adding a class to the individual pieces of content when we find we’re on the home page, allowing our CSS file to do the rest.

    We could, of course, add the class to the body element or some containing element instead, but that wouldn’t enable as good a demonstration of the global context. By the time we write the HTML for the listed piece of content, . refers to the listed page, but IsHome needs to be called on the main page being rendered. The global context comes to our rescue:

    <section class="listing">
      <div class="container">
        <h1>{{ .Params.listing_headline }}</h1>
        <div>
          {{ range .Params.listing }}
            {{ with $.GetPage . }}
              <article{{ if $.IsHome }} class="home"{{ end }}>
                <h2>{{ .Title }}</h2>
                {{ .Summary }}
                <p><a href="{{ .Permalink }}">Read more →</a></p>
              </article>
            {{ end }}
          {{ end }}
        </div>
      </div>
    </section>
    

    The blog index looks just like our home page did, albeit with different content:

    [Screenshot: the blog index page]

    …but our home page now displays its featured content in a grid:

    [Screenshot: the home page, with listed content in a two-column grid]

    Partial Templates

    When building up a real website, it quickly becomes useful to split your templates into parts. Perhaps you want to reuse some particular part of a template, or perhaps you just want to split a huge, unwieldy template into coherent pieces. For this purpose, Hugo’s partial templates are the way to go.

    From a context perspective, the important thing here is that when we include a partial template, we explicitly pass it the context we want to make available to it. A common practice is to pass in the context as it is when the partial is included, like this: {{ partial "my/partial.html" . }}. If the dot here refers to the page being rendered, that’s what will be passed to the partial; if the context has been rebound to something else, that’s what’s passed down.

    You can, of course, rebind the context in partial templates just like in normal ones. In this case, the global context, $, refers to the original context passed to the partial, not the main page being rendered (unless that’s what was passed in).

    If we want a partial template to have access to some particular piece of data, we might run into problems if we pass only this to the partial. Recall our problem earlier with accessing page methods after rebinding the context? The same goes for partials, but in this case the global context can’t help us — if we’ve passed in, say, a string to a partial template, the global context in the partial will refer to that string, and we won’t be able to call methods defined on the page context.

    The solution to this problem lies in passing more than one piece of data when including the partial. While we’re only allowed to provide a single argument to the partial call, we can make this argument a compound data type, commonly a map (known as a dictionary or a hash in other programming languages).

    In this map, we can, for example, have a Page key set to the current page object, along with other keys for any custom data to pass in. The page object will then be available as .Page in the partial, and the other values of the map are accessed similarly. A map is created using the dict template function, which takes an even number of arguments, interpreted alternately as a key, its value, a key, its value and so on.

    In our example template, let’s move the code for our featured and listed content into partials. For the featured content, it’s enough to pass in the featured page object. The listed content, however, needs access to the .IsHome method in addition to the particular listed content being rendered. As mentioned earlier, while .IsHome is available on the page object for the listed page as well, that won’t give us the correct answer — we want to know if the main page being rendered is the home page.

    We could instead pass in a boolean set to the result of calling .IsHome, but perhaps the partial will need access to other page methods in the future — let’s go with passing in the main page object as well as the listed page object. In our example, the main page is found in $ and the listed page in .. So, in the map passed to the listed partial, the key Page gets the value $ while the key “Listed” gets the value .. This is the updated main template:

    <body>
      <nav>
        <a class="logo" href="{{ "/" | relURL }}">
          <img src="/img/tower-logo.svg">
          <img src="/img/tower-claim.svg">
        </a>
        <ul>
          <li><a href="/">Home</a></li>
          <li><a href="/blog/">Blog</a></li>
        </ul>
      </nav>
      <section class="featured">
        <div class="container">
          {{ with .GetPage .Params.featured }}
            {{ partial "partials/featured.html" . }}
          {{ end }}
        </div>
      </section>
      <section class="content">
        <div class="container">
          <h1>{{ .Title }}</h1>
          {{ .Content }}
        </div>
      </section>
      <aside>
        <h2>{{ .Params.banner.headline }}</h2>
        <p>{{ .Params.banner.subline }}</p>
      </aside>
      <section class="listing">
        <div class="container">
          <h1>{{ .Params.listing_headline }}</h1>
          <div>
            {{ range .Params.listing }}
              {{ with $.GetPage . }}
                {{ partial "partials/listed.html" (dict "Page" $ "Listed" .) }}
              {{ end }}
            {{ end }}
          </div>
        </div>
      </section>
    </body>
    

    The content of our “featured” partial does not change compared to when it was part of the list template:

    <article>
      <h2>{{ .Title }}</h2>
      {{ .Summary }}
      <p><a href="{{ .Permalink }}">Read more →</a></p>
    </article>
    

    Our partial for listed content, however, reflects the fact that the original page object is now found in .Page while the listed piece of content is found in .Listed:

    <article{{ if .Page.IsHome }} class="home"{{ end }}>
      <h2>{{ .Listed.Title }}</h2>
      {{ .Listed.Summary }}
      <p><a href="{{ .Listed.Permalink }}">Read more →</a></p>
    </article>
    

    Hugo also provides base template functionality which lets you extend a common base template, as opposed to including subtemplates. In this case, the context works similarly: when extending a base template, you provide the data that will constitute the original context in that template.

    Custom Variables

    It is also possible to assign and reassign your own custom variables in a Hugo template. These will be available in the template where they’re declared, but won’t make their way into any partials or base templates unless we explicitly pass them on. A custom variable declared inside a “block” like the one specified by an if statement will only be available inside that block — if we want to refer to it outside the block, we need to declare it outside the block, then modify it inside the block as required.

    Custom variables have names prefixed by a dollar sign ($). To declare a variable and give it a value at the same time, use the := operator. Subsequent assignments to the variable use the = operator (without colon). A variable can’t be assigned to before being declared, and it can’t be declared without giving it a value.
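    Both operators come from Go’s template language, so their scoping rules can be sketched in a plain Go program; IsHome here is a stand-in for the page method we used earlier:

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// Page is a hypothetical stand-in for the page being rendered.
type Page struct {
	IsHome bool
}

// render compiles src and executes it against ctx, returning the output.
func render(src string, ctx interface{}) string {
	var buf bytes.Buffer
	if err := template.Must(template.New("t").Parse(src)).Execute(&buf, ctx); err != nil {
		panic(err)
	}
	return buf.String()
}

func main() {
	// $class is declared with := outside the if block and reassigned with =
	// inside it; because the declaration is outside, the change is visible
	// after the block ends.
	src := `{{ $class := "list" }}{{ if .IsHome }}{{ $class = "home" }}{{ end }}{{ $class }}`
	fmt.Println(render(src, Page{IsHome: true}))
	fmt.Println(render(src, Page{IsHome: false}))
}
```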

    One use case for custom variables is simplifying long function calls by assigning some intermediate result to an appropriately named variable. For example, we could assign the featured page object to a variable named $featured and then supply this variable to the with statement. We could also put the data to supply to the “listed” partial in a variable and give that to the partial call.

    Here’s what our template would look like with those changes:

    <section class="featured">
      <div class="container">
        {{ $featured := .GetPage .Params.featured }}
        {{ with $featured }}
          {{ partial "partials/featured.html" . }}
        {{ end }}
      </div>
    </section>
    <section class="content">
      ...
    </section>
    <aside>
      ...
    </aside>
    <section class="listing">
      <div class="container">
        <h1>{{ .Params.listing_headline }}</h1>
        <div>
          {{ range .Params.listing }}
            {{ with $.GetPage . }}
              {{ $context := (dict "Page" $ "Listed" .) }}
              {{ partial "partials/listed.html" $context }}
            {{ end }}
          {{ end }}
        </div>
      </div>
    </section>
    

    Based on my experience with Hugo, I’d recommend using custom variables liberally as soon as you’re trying to implement some more involved logic in a template. While it’s natural to try to keep your code concise, this may easily make things less clear than they could be, confusing you and others.

    Instead, use descriptively named variables for each step and don’t worry about using two lines (or three, or four, etc.) where one would do.

    .Scratch

    Finally, let’s cover the .Scratch mechanism. In earlier versions of Hugo, custom variables could only be assigned to once; it was not possible to redefine a custom variable. Nowadays, custom variables can be redefined, which makes .Scratch less important, though it still has its uses.

    In short, .Scratch is a scratch area allowing you to set and modify your own variables, like custom variables. Unlike custom variables, .Scratch belongs to the page context, so passing that context on to a partial, for example, will bring the scratch variables along with it automatically.

    You can set and retrieve variables on .Scratch by calling its methods Set and Get. There are more methods than these, for example for setting and updating compound data types, but these two will suffice for our needs here. Set takes two parameters: the key and the value of the data you want to set. Get takes only one: the key of the data you want to retrieve.

    Earlier, we used dict to create a map data structure to pass multiple pieces of data to a partial. This was done so that the partial for a listed page would have access to both the original page context and the particular listed page object. Using .Scratch is not necessarily a better or worse way to do this — whichever is preferable may depend on the situation.

    Let’s see what our list template would look like using .Scratch instead of dict to pass data to the partial. We call $.Scratch.Set (again using the global context) to set the scratch variable “listed” to . — in this case, the listed page object. Then we pass just the page object, $, to the partial. The scratch variables will follow along automatically.

    <section class="listing">
      <div class="container">
        <h1>{{ .Params.listing_headline }}</h1>
        <div>
          {{ range .Params.listing }}
            {{ with $.GetPage . }}
              {{ $.Scratch.Set "listed" . }}
              {{ partial "partials/listed.html" $ }}
            {{ end }}
          {{ end }}
        </div>
      </div>
    </section>
    

    This would require some modification to the listed.html partial as well — the original page context is now available as “the dot” while the listed page is retrieved from the .Scratch object. We’ll use a custom variable to simplify access to the listed page:

    <article{{ if .IsHome }} class="home"{{ end }}>
      {{ $listed := .Scratch.Get "listed" }}
      <h2>{{ $listed.Title }}</h2>
      {{ $listed.Summary }}
      <p><a href="{{ $listed.Permalink }}">Read more →</a></p>
    </article>
    

    One argument for doing things this way is consistency. Using .Scratch, you can make it a habit to always pass in the current page object to any partial, adding any extra data as scratch variables. Then, whenever you write or edit your partials, you know that . is a page object. Of course, you can establish a convention for yourself using a passed-in map as well: always sending along the page object as .Page, for example.

    Conclusion

    When it comes to context and data, a static site generator brings both benefits and limitations. On one hand, an operation that is too inefficient when run for every page visit may be perfectly good when run only once as the page is compiled. On the other hand, it may surprise you how often it would be useful to have access to some part of the network request even on a predominantly static site.

    To handle query string parameters on a static site, for example, you’d have to resort to JavaScript or some proprietary solution like Netlify’s redirects. The point here is that while the jump from a dynamic to a static site is simple in theory, it does take a shift in mindset. In the beginning, it’s easy to fall back on old habits, but practice makes perfect.
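
For instance, reading a query-string parameter on the client can be done with the standard URLSearchParams API. A minimal sketch (the parameter name "ref" is just an example, not from the article):

```typescript
// Client-side query-string handling on a static site.
// URLSearchParams is a standard Web (and Node.js) API;
// the "ref" parameter name is only an illustration.
function getQueryParam(search: string, name: string): string | null {
  return new URLSearchParams(search).get(name);
}

// In a browser, you would pass window.location.search:
const ref = getQueryParam("?ref=newsletter&utm=top", "ref"); // "newsletter"
```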

    With that, we conclude our look at data management in the Hugo static site generator. Even though we focused only on a narrow sector of its functionality, there are certainly things we didn’t cover that could have been included. Nevertheless, I hope this article gave you some added insight into how data flows from content files, to templates, to subtemplates and how it can be modified along the way.

    Note: If you already have some Hugo experience, we have a nice resource for you, quite appropriately residing on our aforementioned, Hugo-driven “Learn” site! When you just need to check the order of the arguments to the replaceRE function, how to retrieve the next page in a section, or what the “expiration date” front matter field is called, a cheat sheet comes in handy. We’ve put together just such a reference, so download a Hugo cheat sheet, in a package also featuring a host of other cheat sheets on everything from Git to the Visual Studio Code editor.


    Smashing Editorial
    (vf, il)


    web design

    Dynamic Static Typing In TypeScript — Smashing Magazine

    01/29/2021

    About The Author

    Stefan Baumgartner is a software architect based in Austria. He has published online since the late 1990s, writing for Manning, Smashing Magazine, and A List …
    More about
    Stefan

    In this article, we look at some of the more advanced features of TypeScript, like union types, conditional types, template literal types, and generics. We want to formalize the most dynamic JavaScript behavior in a way that we can catch most bugs before they happen. We apply several learnings from all chapters of TypeScript in 50 Lessons, a book we’ve published here on Smashing Magazine late 2020. If you are interested in learning more, be sure to check it out!

    JavaScript is an inherently dynamic programming language. We as developers can express a lot with little effort, and the language and its runtime figure out what we intended to do. This is what makes JavaScript so popular for beginners, and which makes experienced developers productive! There is a caveat, though: We need to be alert! Mistakes, typos, correct program behavior: A lot of that happens in our heads!

    Take a look at the following example.

    app.get("/api/users/:userID", function(req, res) {
      if (req.method === "POST") {
        res.status(20).send({
          message: "Welcome, user " + req.params.userId
        });
      }
    })
    

    We have an Express-style server (https://expressjs.com/) that allows us to define a route (or path) and executes a callback if the URL is requested.

    The callback takes two arguments:

    1. The request object.
      Here we get information on the HTTP method used (e.g. GET, POST, PUT, DELETE) and any additional parameters that come in. In this example, :userID should be mapped to a parameter userID that, well, contains the user’s ID!
    2. The response or reply object.
      Here we want to prepare a proper response from the server to the client. We want to send correct status codes (via the status method) and send JSON output over the wire.

    What we see in this example is heavily simplified, but it gives a good idea of what we are up to. The example above is also riddled with errors! Have a look:

    app.get("/api/users/:userID", function(req, res) {
      if (req.method === "POST") { /* Error 1 */
        res.status(20).send({ /* Error 2 */
          message: "Welcome, user " + req.params.userId /* Error 3 */
        });
      }
    })
    

    Oh wow! Three lines of implementation code, and three errors? What has happened?

    1. The first error is nuanced. While we tell our app that we want to listen to GET requests (hence app.get), we only do something if the request method is POST. At this particular point in our application, req.method can’t be POST. So we would never send any response, which might lead to unexpected timeouts.
    2. Great that we explicitly send a status code! 20 isn’t a valid status code, though. Clients might not understand what’s happening here.
    3. This is the response we want to send back. We access the parsed arguments but have a mean typo. It’s userID not userId. All our users would be greeted with “Welcome, user undefined!”. Something you definitely have seen in the wild!

    And things like that happen! Especially in JavaScript. We gain expressiveness – not once did we have to bother about types – but have to pay close attention to what we’re doing.

    This is also where JavaScript gets a lot of backlash from programmers who aren’t used to dynamic programming languages. They usually have compilers pointing them to possible problems and catching errors upfront. They might come off as snooty when they frown upon the amount of extra work you have to do in your head to make sure everything works right. They might even tell you that JavaScript has no types. Which is not true.

    Anders Hejlsberg, the lead architect of TypeScript, said in his MS Build 2017 keynote that “it’s not that JavaScript has no type system. There is just no way of formalizing it”.

    And this is TypeScript’s main purpose. TypeScript wants to understand your JavaScript code better than you do. And where TypeScript can’t figure out what you mean, you can assist by providing extra type information.

    Basic Typing

    And this is what we’re going to do right now. Let’s take the get method from our Express-style server and add enough type information so we can exclude as many categories of errors as possible.

    We start with some basic type information. We have an app object that points to a get function. The get function takes path, which is a string, and a callback.

    const app = {
      get, /* post, put, delete, ... to come! */
    };
    
    function get(path: string, callback: CallbackFn) {
      // to be implemented --> not important right now
    }
    

    While string is a basic, so-called primitive type, CallbackFn is a compound type that we have to explicitly define.

    CallbackFn is a function type that takes two arguments:

    • req, which is of type ServerRequest
    • reply which is of type ServerReply

    CallbackFn returns void.

    type CallbackFn = (req: ServerRequest, reply: ServerReply) => void;
    

    ServerRequest is a pretty complex object in most frameworks. We do a simplified version for demonstration purposes. We pass in a method string, for "GET", "POST", "PUT", "DELETE", etc. It also has a params record. Records are objects that associate a set of keys with a set of properties. For now, we want to allow for every string key to be mapped to a string property. We refactor this one later.

    type ServerRequest = {
      method: string;
      params: Record<string, string>;
    };
    

    For ServerReply, we lay out some functions, knowing that a real ServerReply object has much more. A send function takes an optional argument with the data we want to send. And we have the possibility to set a status code with the status function.

    type ServerReply = {
      send: (obj?: any) => void;
      status: (statusCode: number) => ServerReply;
    };
    

    That’s already something, and we can rule out a couple of errors:

    app.get("/api/users/:userID", function(req, res) {
      if(req.method === 2) {
    //   ^^^^^^^^^^^^^^^^^ 💥 Error, type number is not assignable to string
    
        res.status("200").send()
    //             ^^^^^ 💥 Error, type string is not assignable to number
      }
    })
    

    But we still can send wrong status codes (any number is possible) and have no clue about the possible HTTP methods (any string is possible). Let’s refine our types.

    Smaller Sets

    You can see primitive types as the set of all possible values of a certain category. For example, string includes all possible strings that can be expressed in JavaScript, and number includes all possible double-precision floating-point numbers. boolean includes all possible boolean values, which are true and false.

    TypeScript allows you to refine those sets to smaller subsets. For example, we can create a type Method that includes all possible strings we can receive for HTTP methods:

    type Methods = "GET" | "POST" | "PUT" | "DELETE";
    
    type ServerRequest = {
      method: Methods;
      params: Record<string, string>;
    };
    

    Methods is a smaller set than the bigger string set. Methods is a union type of literal types. A literal type is the smallest unit of a given set. A literal string. A literal number. There is no ambiguity. It’s just "GET". You put literal types in a union with others, creating a subset of whatever bigger types you have. You can also create a subset with literal types of both string and number, or of different compound object types. There are lots of possibilities for combining literal types into unions.
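
As a small standalone illustration (the Flag type and isEnabled function are our own example, not part of the server code), a union can even mix string and number literals:

```typescript
// A union type mixing string and number literal types.
type Flag = "on" | "off" | 0 | 1;

function isEnabled(flag: Flag): boolean {
  // Only the literal members of Flag are possible here.
  return flag === "on" || flag === 1;
}

isEnabled("on"); // true
isEnabled(0);    // false
// isEnabled("yes"); // 💥 '"yes"' is not assignable to type 'Flag'
```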

    This has an immediate effect on our server callback. Suddenly, we can differentiate between those four methods (or more if necessary) and can exhaust all possibilities in code. TypeScript will guide us:

    app.get("/api/users/:userID", function (req, res) {
      // at this point, TypeScript knows that req.method
      // can take one of four possible values
      switch (req.method) {
        case "GET":
          break;
        case "POST":
          break;
        case "DELETE":
          break;
        case "PUT":
          break;
        default:
          // here, req.method is never
          req.method;
      }
    });
    

    With every case statement you make, TypeScript can give you information on the available options. Try it out for yourself. If you exhausted all options, TypeScript will tell you in your default branch that this can never happen. This is literally the type never, which means that you possibly have reached an error state that you need to handle.
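
A common way to make that never state explicit is a small helper function. This assertNever pattern is a widely used idiom, though not part of the article’s server code (Methods is redeclared here so the sketch is self-contained):

```typescript
type Methods = "GET" | "POST" | "PUT" | "DELETE";

// The compiler only lets a value reach this function once its
// type has been narrowed all the way down to never.
function assertNever(value: never): never {
  throw new Error(`Unhandled case: ${value}`);
}

function describe(method: Methods): string {
  switch (method) {
    case "GET": return "read";
    case "POST": return "create";
    case "PUT": return "update";
    case "DELETE": return "remove";
    default:
      // If a new member is added to Methods but not handled
      // above, this line stops compiling.
      return assertNever(method);
  }
}
```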

    That’s one category of errors less. We know now exactly which possible HTTP methods are available.

    We can do the same for HTTP status codes, by defining a subset of valid numbers that statusCode can take:

    type StatusCode = 
      100 | 101 | 102 | 200 | 201 | 202 | 203 | 204 | 205 | 
      206 | 207 | 208 | 226 | 300 | 301 | 302 | 303 | 304 | 
      305 | 306 | 307 | 308 | 400 | 401 | 402 | 403 | 404 |
      405 | 406 | 407 | 408 | 409 | 410 | 411 | 412 | 413 |
      414 | 415 | 416 | 417 | 418 | 420 | 422 | 423 | 424 | 
      425 | 426 | 428 | 429 | 431 | 444 | 449 | 450 | 451 | 
      499 | 500 | 501 | 502 | 503 | 504 | 505 | 506 | 507 | 
      508 | 509 | 510 | 511 | 598 | 599;
    
    type ServerReply = {
      send: (obj?: any) => void;
      status: (statusCode: StatusCode) => ServerReply;
    };
    

    Type StatusCode is again a union type. And with that, we exclude another category of errors. Suddenly, code like that fails:

    app.get("/api/user/:userID", (req, res) => {
     if(req.method === "POS") {
    //   ^^^^^^^^^^^^^^^^^^^ 'Methods' and '"POS"' have no overlap.
        res.status(20)
    //             ^^ '20' is not assignable to parameter of type 'StatusCode'
     }
    })
    

    And our software becomes a lot safer! But we can do more!

    Enter Generics

    When we define a route with app.get, we implicitly know that the only HTTP method possible is "GET". But with our type definitions, we still have to check for all possible parts of the union.

    The type for CallbackFn is correct, as we could define callback functions for all possible HTTP methods, but if we explicitly call app.get, it would be nice to save some extra steps which are only necessary to comply with typings.

    TypeScript generics can help! Generics are one of the major features in TypeScript that allow you to get the most dynamic behaviour out of static types. In TypeScript in 50 Lessons, we spend the last three chapters digging into all the intricacies of generics and their unique functionality.

    What you need to know right now is that we want to define ServerRequest in a way that we can specify a part of Methods instead of the entire set. For that, we use the generic syntax where we can define parameters as we would do with functions:

    type ServerRequest<Met extends Methods> = {
      method: Met;
      params: Record<string, string>;
    };
    

    This is what happens:

    1. ServerRequest becomes a generic type, as indicated by the angle brackets
    2. We define a generic parameter called Met, which is a subset of type Methods
    3. We use this generic parameter as a generic variable to define the method.

    I also encourage you to check out my article on naming generic parameters.

    With that change, we can specify different ServerRequests without duplicating things:

    type OnlyGET = ServerRequest<"GET">;
    type OnlyPOST = ServerRequest<"POST">;
    type POSTorPUT = ServerRequest<"POST" | "PUT">;
    

    Since we changed the interface of ServerRequest, we have to make changes to all our other types that use ServerRequest, like CallbackFn and the get function:

    type CallbackFn<Met extends Methods> = (
      req: ServerRequest<Met>,
      reply: ServerReply
    ) => void;
    
    function get(path: string, callback: CallbackFn<"GET">) {
      // to be implemented
    }
    

    With the get function, we pass an actual argument to our generic type. We know that this won’t be just any subset of Methods; we know exactly which subset we are dealing with.

    Now, when we use app.get, we only have one possible value for req.method:

    app.get("/api/users/:userID", function (req, res) {
      req.method; // can only be "GET"
    });
    

    This ensures that we don’t assume that HTTP methods like "POST" or similar are available when we create an app.get callback. We know exactly what we are dealing with at this point, so let’s reflect that in our types.

    We already did a lot to make sure that req.method is reasonably typed and represents the actual state of affairs. One nice benefit we get from subsetting the Methods union type is that we can create a general-purpose callback function outside of app.get that is type-safe:

    const handler: CallbackFn<"PUT" | "POST"> = function(req, res) {
      req.method // can be "POST" or "PUT"
    };
    
    const handlerForAllMethods: CallbackFn<Methods> = function(req, res) {
      req.method // can be all methods
    };
    
    
    app.get("/api", handler);
    //              ^^^^^^^ 💥 Nope, we don’t handle "GET"
    
    app.get("/api", handlerForAllMethods); // 👍 This works
    

    Typing Params

    What we haven’t touched yet is typing the params object. So far, we get a record that allows accessing every string key. It’s our task now to make that a little bit more specific!

    We do that by adding another generic variable. One for methods, one for the possible keys in our Record:

    type ServerRequest<Met extends Methods, Par extends string = string> = {
      method: Met;
      params: Record<Par, string>;
    };
    

    The generic type variable Par can be any subset of type string, and it defaults to string itself. With that, we can tell ServerRequest which keys we expect:

    // request.method = "GET"
    // request.params = {
    //   userID: string
    // }
    type WithUserID = ServerRequest<"GET", "userID">
    

    Let’s add the new argument to our get function and the CallbackFn type, so we can set the requested parameters:

    function get<Par extends string = string>(
      path: string,
      callback: CallbackFn<"GET", Par>
    ) {
      // to be implemented
    }
    
    type CallbackFn<Met extends Methods, Par extends string> = (
      req: ServerRequest<Met, Par>,
      reply: ServerReply
    ) => void;
    

    If we don’t set Par explicitly, the type works as we are used to, since Par defaults to string. If we set it though, we suddenly have a proper definition for the req.params object!

    app.get<"userID">("/api/users/:userID", function (req, res) {
      req.params.userID; // Works!!
      req.params.anythingElse; // 💥 doesn’t work!!
    });
    

    That’s great! There is one little thing that can be improved, though. We still can pass every string to the path argument of app.get. Wouldn’t it be better if we could reflect Par in there as well?

    We can! With the release of version 4.1, TypeScript is able to create template literal types. Syntactically, they work just like string template literals, but on a type level. Where we were able to split the string set into subsets with string literal types (as we did with Methods), template literal types allow us to include an entire spectrum of strings.
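
As a quick standalone illustration (the Pixels type and the regular expression are our own example), a template literal type describes a whole family of strings, and a runtime type guard can mirror it:

```typescript
// A template literal type: every string of the shape "<number>px".
type Pixels = `${number}px`;

const ok: Pixels = "16px";
// const bad: Pixels = "16em"; // 💥 not assignable to type '`${number}px`'

// A simplified runtime guard mirroring the type
// (assumption: plain decimal values only).
function isPixels(value: string): value is Pixels {
  return /^\d+(\.\d+)?px$/.test(value);
}
```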

    Let’s create a type called IncludesRouteParams, where we want to make sure that Par is properly included in the Express-style way of adding a colon in front of the parameter name:

    type IncludesRouteParams<Par extends string> =
      | `${string}/:${Par}`
      | `${string}/:${Par}/${string}`;
    

    The generic type IncludesRouteParams takes one argument, which is a subset of string. It creates a union type of two template literals:

    1. The first template literal starts with any string, then includes a / character followed by a : character, followed by the parameter name. This makes sure that we catch all cases where the parameter is at the end of the route string.
    2. The second template literal starts with any string, followed by the same pattern of /, : and the parameter name. Then we have another / character, followed by any string. This branch of the union type makes sure we catch all cases where the parameter is somewhere within a route.

    This is how IncludesRouteParams with the parameter name userID behaves with different test cases:

    const a: IncludesRouteParams<"userID"> = "/api/user/:userID" // 👍
    const b: IncludesRouteParams<"userID"> = "/api/user/:userID/orders" // 👍
    const c: IncludesRouteParams<"userID"> = "/api/user/:userId" // 💥
    const d: IncludesRouteParams<"userID"> = "/api/user" // 💥
    const e: IncludesRouteParams<"userID"> = "/api/user/:userIDAndmore" // 💥
    

    Let’s include our new utility type in the get function declaration.

    function get<Par extends string = string>(
      path: IncludesRouteParams<Par>,
      callback: CallbackFn<"GET", Par>
    ) {
      // to be implemented
    }
    
    app.get<"userID">(
      "/api/users/:userID",
      function (req, res) {
        req.params.userID; // YEAH!
      }
    );
    

    Great! We get another safety mechanism to ensure that we don’t miss out on adding the parameters to the actual route! How powerful.

    Generic bindings

    But guess what, I’m still not happy with it. There are a few issues with that approach that become apparent the moment your routes get a little more complex.

    1. The first issue I have is that we need to explicitly state our parameters in the generic type parameter. We have to bind Par to "userID", even though we would specify it anyway in the path argument of the function. This is not JavaScript-y!
    2. This approach only handles one route parameter. The moment we add a union, e.g. "userID" | "orderID", the failsafe check is satisfied with only one of those arguments being available. That’s how sets work. It can be one, or the other.

    There must be a better way. And there is. Otherwise, this article would end on a very bitter note.

    Let’s invert the order! Let’s not try to define the route params in a generic type variable, but rather extract the variables from the path we pass as the first argument of app.get.

    To get to the actual value, we have to figure out how generic binding works in TypeScript. Let’s take this identity function as an example:

    function identity<T>(inp: T) : T {
      return inp
    }
    

    It might be the most boring generic function you’ll ever see, but it illustrates one point perfectly. identity takes one argument of the generic type T, and it also returns a value of that same type.

    Now we can bind T to string, for example:

    const z = identity<string>("yes"); // z is of type string
    

    This explicit generic binding makes sure that we only pass strings to identity, and since we explicitly bind, the return type is also string. If we forget to bind, something interesting happens:

    const y = identity("yes") // y is of type "yes"
    

    In that case, TypeScript infers the type from the argument you pass in, and binds T to the string literal type "yes". This is a great way of converting a function argument to a literal type, which we then use in our other generic types.

    Let’s do that by adapting app.get.

    function get<Path extends string = string>(
      path: Path,
      callback: CallbackFn<"GET", ParseRouteParams<Path>>
    ) {
      // to be implemented
    }
    

    We remove the Par generic type and add Path. Path can be a subset of any string. We set path to this generic type Path, which means the moment we pass a parameter to get, we catch its string literal type. We pass Path to a new generic type ParseRouteParams which we haven’t created yet.

    Let’s work on ParseRouteParams. Here, we switch the order of events around again. Instead of passing the requested route params to the generic to make sure the path is alright, we pass the route path and extract the possible route params. For that, we need to create a conditional type.

    Conditional Types And Recursive Template Literal Types

    Conditional types are syntactically similar to the ternary operator in JavaScript. You check for a condition, and if the condition is met, you return branch A, otherwise, you return branch B. For example:

    type ParseRouteParams<Rte> = 
      Rte extends `${string}/:${infer P}`
      ? P
      : never;
    

    Here, we check if Rte is a subset of every path that ends with an Express-style parameter (with a preceding "/:"). If so, we infer this string, which means we capture its contents in a new variable. If the condition is met, we return the newly extracted string; otherwise, we return never, as in: “There are no route parameters”.

    If we try it out, we get something like that:

    type Params = ParseRouteParams<"/api/user/:userID"> // Params is "userID"
    
    type NoParams = ParseRouteParams<"/api/user"> // NoParams is never --> no params!
    

    Great, that’s already much better than we did earlier. Now, we want to catch all other possible parameters. For that, we have to add another condition:

    type ParseRouteParams<Rte> = Rte extends `${string}/:${infer P}/${infer Rest}`
      ? P | ParseRouteParams<`/${Rest}`>
      : Rte extends `${string}/:${infer P}`
      ? P
      : never;
    

    Our conditional type works now as follows:

    1. In the first condition, we check if there is a route parameter somewhere in the middle of the route. If so, we extract both the route parameter and everything that comes after it. We return the newly found route parameter P in a union, where we call the same generic type recursively with the Rest. For example, if we pass the route "/api/users/:userID/orders/:orderID" to ParseRouteParams, we infer "userID" into P and "orders/:orderID" into Rest, and then call the same type again with Rest.
    2. This is where the second condition comes in. Here we check if there is a parameter at the end. This is the case for "orders/:orderID". We extract "orderID" and return this literal type.
    3. If there is no more route parameter left, we return never.
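
The recursive type does at the type level what a runtime parser would do with the same string. A sketch (this helper is ours, not part of the article’s code):

```typescript
// Runtime counterpart of ParseRouteParams: collect every
// ":name" segment of an Express-style route.
function parseRouteParams(route: string): string[] {
  return route
    .split("/")
    .filter((segment) => segment.startsWith(":"))
    .map((segment) => segment.slice(1));
}

parseRouteParams("/api/users/:userID/orders/:orderID");
// ["userID", "orderID"]
```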

    Dan Vanderkam shows a similar, and more elaborate type for ParseRouteParams, but the one you see above should work as well. If we try out our newly adapted ParseRouteParams, we get something like this:

    // Params is "userID" | "orderID"
    type Params = ParseRouteParams<"/api/users/:userID/orders/:orderID">

    Let’s apply this new type and see what our final usage of app.get looks like.

    app.get("/api/users/:userID/orders/:orderID", function (req, res) {
      req.params.userID; // YES!!
      req.params.orderID; // Also YES!!!
    });
    

    Wow. That just looks like the JavaScript code we had at the beginning!

    Static Types For Dynamic Behavior

    The types we just created for one function app.get make sure that we exclude a ton of possible errors:

    1. We can only pass proper numeric status codes to res.status()
    2. req.method is one of four possible strings, and when we use app.get, we know it can only be "GET"
    3. We can parse route params and make sure that we don’t have any typos inside our callback

    If we look at the example from the beginning of this article, we get the following error messages:

    app.get("/api/users/:userID", function(req, res) {
      if (req.method === "POST") {
    //    ^^^^^^^^^^^^^^^^^^^^^
    //    This condition will always return 'false'
    //     since the types '"GET"' and '"POST"' have no overlap.
        res.status(20).send({
    //             ^^
    //             Argument of type '20' is not assignable to 
    //             parameter of type 'StatusCode'
          message: "Welcome, user " + req.params.userId 
    //                                           ^^^^^^
    //         Property 'userId' does not exist on type 
    //    '{ userID: string; }'. Did you mean 'userID'?
        });
      }
    })
    

    And all that before we actually run our code! Express-style servers are a perfect example of the dynamic nature of JavaScript. Depending on the method you call, the string you pass for the first argument, a lot of behavior changes inside the callback. Take another example and all your types look entirely different.

    But with a few well-defined types, we can catch this dynamic behavior while editing our code. At compile time with static types, not at runtime when things go boom!

    And this is the power of TypeScript. A static type system that tries to formalize all the dynamic JavaScript behavior we all know so well. If you want to try the example we just created, head over to the TypeScript playground and fiddle around with it.


    TypeScript in 50 Lessons by Stefan Baumgartner

    In this article, we touched upon many concepts. If you’d like to know more, check out TypeScript in 50 Lessons, where you get a gentle introduction to the type system in small, easily digestible lessons. Ebook versions are available immediately, and the print book will make a great reference for your coding library.



    web design

    How To Migrate From WordPress To The Eleventy Static Site Generator — Smashing Magazine

    12/04/2020

    About The Author

    Scott Dawson lives in Trumansburg, New York. He’s a web designer and developer and enjoys writing, acting, creating art, and making music. Scott is a front-end …
    More about
    Scott

    If you’re a designer or developer with intermediate knowledge of HTML and JavaScript, and know your way around GitHub and the command line, this tutorial is for you. We’re going to walk step-by-step through converting a WordPress site into a static site generated from Markdown.

    Eleventy is a static site generator. We’re going to delve into why you’d want to use a static site generator, get into the nitty-gritty of converting a simple WordPress site to Eleventy, and talk about the pros and cons of managing content this way. Let’s go!

    What Is A Static Site Generator?

    I started my web development career decades ago in the mid-1990s when HTML and CSS were the only things you needed to get a website up and running. Those simple, static websites were fast and responsive. Fast forward to the present day, though, and a simple website can be pretty complicated.

    In the case of WordPress, let’s think through what it takes to render a web page. WordPress server-side PHP, running on a host’s servers, does the heavy lifting of querying a MySQL database for metadata and content, chooses the right versions of images stored on a static file system, and merges it all into a theme-based template before returning it to the browser. It’s a dynamic process for every page request, though most of the web pages I’ve seen generated by WordPress aren’t really that dynamic. Most visitors, if not all, experience identical content.

    Static site generators flip the model right back to that decades-old approach. Instead of assembling web pages dynamically, static site generators take content in the form of Markdown, merge it with templates, and create static web pages. This process happens outside of the request loop when users are browsing your site. All content has been pre-generated and is served lightning-fast upon each request. Web servers are quite literally doing what they advertise: serving. No database. No third-party plugins. Just pure HTML, CSS, JavaScript, and images. This simplified tech stack also equates to a smaller attack surface for hackers. There’s little server-side infrastructure to exploit, so your site is inherently more secure.
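
The build model described above can be reduced to a toy sketch (all names here are invented for illustration): content and template are merged once, in a build step, rather than on each request:

```typescript
// A toy static-site build step: merge content into a template once.
type Page = { title: string; body: string };

function render(template: string, page: Page): string {
  return template
    .replace("{{title}}", page.title)
    .replace("{{body}}", page.body);
}

const template = "<h1>{{title}}</h1><main>{{body}}</main>";
const pages: Page[] = [{ title: "Hello", body: "<p>Static!</p>" }];

// Every page is rendered ahead of time; a web server then
// only has to serve the resulting HTML files.
const builtPages = pages.map((page) => render(template, page));
```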

    Leading static site generators are feature-rich, too, and that can make a compelling argument for bidding adieu to the tech stacks that are hallmarks of modern content management systems.

    If you’ve been in this industry for a while, you may remember Macromedia’s (pre-Adobe) Dreamweaver product. I loved the concept of library items and templates, specifically how they let me create consistency across multiple web pages. In the case of Eleventy, the concepts of templates, filters, shortcodes, and plugins are close analogs. I got started on this whole journey after reading about Smashing’s enterprise conversion to the JamStack. I also read Mathias Biilmann & Phil Hawksworth’s free book called Modern Web Development on the JAMstack and knew I was ready to roll up my sleeves and convert something of my own.

    Why Not Use A Static Site Generator?

    Static site generators involve a bit of a learning curve. You won’t be able to easily hand off content entry to non-technical editors, and specific use cases may preclude you from using one. Most of the work I’ll show is done in Markdown and via the command line. That said, there are many options for using static site generators in conjunction with dynamic data, e-commerce, commenting, and rating systems.

    You don’t have to convert your entire site over all at once, either. If you have a complicated setup, you might start small and see how you feel about static site generation before putting together a plan to solve something at an enterprise scale. You can also keep using WordPress as a best-in-class headless content management system and use an SSG to serve WordPress content.

    How I Chose Eleventy As A Static Site Generator

    Do a quick search for popular static site generators and you’ll find many great options to start with: Eleventy, Gatsby, Hugo, and Jekyll were leading contenders on my list. How to choose? I did what came naturally and asked some friends. Eleventy was a clear leader in my Twitter poll, but what clinched it was a comment that said “@eleven_ty feels very approachable if one doesn’t know what one is doing.” Hey, that’s me! I can unhappily get caught up in analysis paralysis. Not today… it felt good to choose Eleventy based on a poll and a comment. Since then, I’ve converted four WordPress sites to Eleventy, using GitHub to store the code and Netlify to securely serve the files. That’s exactly what we’re going to do today, so let’s roll up our sleeves and dive in!

    Getting Started: Bootstrapping The Initial Site

    Eleventy has a great collection of starter projects. We’ll use Dan Urbanowicz’s eleventy-netlify-boilerplate as a starting point, advertised as a “template for building a simple blog website with Eleventy and deploying it to Netlify. Includes Netlify CMS and Netlify Forms.” Click “Deploy to netlify” to get started. You’ll be prompted to connect Netlify to GitHub, name your repository (I’m calling mine smashing-eleventy-dawson), and then “Save & Deploy.”

    With that done, a few things happened:

    1. The boilerplate project was added to your GitHub account.
    2. Netlify assigned a dynamic name to the project, built it, and deployed it.
    3. Netlify configured the project to use Identity (if you want to use CMS features) and Forms (a simple contact form).
    Netlify’s initial deployment screen
    This is Netlify’s screen that shows our initial deployment is completed. (Large preview)

    As the screenshot suggests, you can procure or map a domain to the project, and also secure the site with HTTPS. The latter feature was a really compelling selling point for me since my host had been charging an exorbitant fee for SSL. On Netlify, it’s free.

    I clicked Site Settings, then Change Site Name to create a more appropriate name for my site. As much as I liked jovial-goldberg-e9f7e9, elizabeth-dawson-piano is more appropriate. After all, that’s the site we’re converting! When I visit elizabeth-dawson-piano.netlify.app, I see the boilerplate content. Awesome!

    Eleventy Netlify Boilerplate with no customizations
    Our site has been built and is now ready for customizations. (Large preview)

    Let’s download the new repository to our local machine so we can start customizing the site. My GitHub repository for this project gives me the git clone command I can use in Visual Studio Code’s terminal to copy the files:

    Then we follow the remaining instructions in the boilerplate’s README file to install dependencies locally, edit the _data/metadata.json file to match the project and run the project locally.

    • npm install @11ty/eleventy
    • npm install
    • npx eleventy --serve --quiet

    With that last command, Eleventy launches the local development site at localhost:8080 and starts watching for changes.

    Preserving WordPress Posts, Pages, And Images

    The site we’re converting from is an existing WordPress site at elizabethrdawson.wordpress.com. Although the site is simple, it’d be great to leverage as much of that existing content as possible. Nobody really likes to copy and paste that much, right? WordPress makes it easy using its export function.

    WordPress Export Content screen
    WordPress lets you export content and images. (Large preview)

    Export Content gives me a zip file containing an XML extract of the site content. Export Media Library gives me a zip file of the site’s images. The site that I’ve chosen to use as a model for this exercise is a simple 3-page site, and it’s hosted on WordPress.com. If you’re self-hosting, you can go to Tools > Export to get the XML extract, but depending on your host, you may need to use FTP to download the images.

    If you open the XML file in your editor, it’s going to be of little use to you. We need a way to get individual posts into Markdown, which is the language we’re going to use with Eleventy. Lucky for us, there’s a package for converting WordPress posts and pages to Markdown. Clone that repository to your machine and put the XML file in the same directory. Your directory listing should look something like this:

    WordPress XML directory listing
    Directory listing for WordPress-export-to-markdown including WordPress’ XML file. (Large preview)

    If you want to extract posts from the XML, this will work out of the box. However, our sample site has three pages, so we need to make a small adjustment. On line 39 of parser.js, change “post” to “page” before continuing.

    Code snippet showing changes in wordpress-export-to-markdown
    Configure wordpress-export-to-markdown to export pages, not posts. (Large preview)

    Make sure you do an “npm install” in the wordpress-export-to-markdown directory, then enter “node index.js” and follow the prompts.

    That process created three files for me: welcome.md, about.md, and contact.md. In each, there’s front matter that describes the page’s title and date, and the Markdown of the content extracted from the XML. ‘Front matter’ may be a new term for you, and if you look at the section at the top of the sample .md files in the “pages” directory, you’ll see a section of data at the top of the file. Eleventy supports a variety of front matter to help customize your site, and title and date are just the beginning. In the sample pages, you’ll see this in the front matter section:

    eleventyNavigation:
      key: Home
      order: 0
    

    Using this syntax, you can have pages automatically added to the site’s navigation. I wanted to preserve this with my new pages, so I copied and pasted the content of the pages into the existing boilerplate .md files for home, contact, and about. Our sample site won’t have a blog for now, so I’m deleting the .md files from the “posts” directory, too. Now my local preview site looks like this, so we’re getting there!

    Local website preview after customizing content
    Now that we’ve customized some content, our local environment shows the current state of the site. (Large preview)

    This seems like a fine time to commit and push the updates to GitHub. A few things happen when I commit updates. Upon notification from GitHub that updates were made, Netlify runs the build and updates the live site. It’s the same process that happens locally when you’re updating and saving files: Eleventy converts the Markdown files to HTML pages. In fact, if you look in your _site directory locally, you’ll see the HTML version of your website, ready for static serving. So, as I navigate to elizabeth-dawson-piano.netlify.app shortly after committing, I see the same updates I saw locally.

    Adding Images

    We’ll use images from the original site. In the .eleventy.js file, you’ll see that static image assets should go in the static/img folder. Each page will have a hero image, and here’s where front matter works really well. In the front matter section of each page, I’ll add a reference to the hero image:

    hero: /static/img/performance.jpg
    

    Eleventy keeps page layouts in the _includes/layouts folder. base.njk is used by all page types, so we’ll add this code just under the navigation since that’s where we want our hero image.

    {% if (hero) %}
    <img class="page-hero" src="{{ hero }}" alt="Hero image for {{ title }}" />
    {% endif %}
    

    I also included an image tag for the picture of Elizabeth on the About page, using a CSS class to align it and give it proper padding. Now’s a good time to commit and see exactly what changed.

    Embedding A YouTube Player With A Plugin

    There are a few YouTube videos on the home page. Let’s use a plugin to create YouTube’s embed code automatically. eleventy-plugin-youtube-embed is a great option for this. The installation instructions are pretty clear: install the package with npm and then include it in our .eleventy.js file. Without any further changes, those YouTube URLs are transformed into embedded players. (see commit)

    Using Collections And Filters

    We don’t need a blog for this site, but we do need a way to let people know about upcoming events. Our events — for all intents and purposes — will be just like blog posts. Each has a title, a description, and a date.

    There are a few steps we need to create this new collection-based page:

    • Create a new events.md file in our pages directory.
    • Add a few events to our posts directory. I’ve added .md files for a holiday concert, a spring concert, and a fall recital.
    • Create a collection definition in .eleventy.js so we can treat these events as a collection. Here’s how the collection is defined: we gather all Markdown files in the posts directory and filter out anything that doesn’t have a location specified in the front matter.
    eleventyConfig.addCollection("events", (collection) =>
        collection.getFilteredByGlob("posts/*.md").filter( post => {
            return ( post.data.location ? post : false );
        })
    );
    
    • Add a reference to the collection to our events.md file, showing each event as an entry in a table. Here’s what iterating over a collection looks like:
    <table>
        <thead>
            <tr>
                <th>Date</th>
                <th>Title</th>
                <th>Location</th>
            </tr>    
        </thead>
        <tbody>
            {%- for post in collections.events -%}
            <tr>
                <td>{{ post.date }}</td>
                <td><a href="{{ post.url }}">{{ post.data.title }}</a></td>
                <td>{{ post.data.location }}</td>
            </tr>    
            {%- endfor -%}
        </tbody>
    </table>
    

    However, our date formatting looks pretty bad.

    Table with unformatted dates
    Our date formats could use some work. (Large preview)

    Luckily, the boilerplate .eleventy.js file already has a filter titled readableDate. It’s easy to use filters on content in Markdown files and templates:

    {{ post.date | readableDate }}

    Now, our dates are properly formatted! Eleventy’s filter documentation goes into more depth on what filters are available in the framework, and how you can add your own. (see: commit)
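A filter like readableDate could be implemented with nothing more than the standard Intl API. Here’s a sketch — the format choices and registration shown are my assumptions, not the boilerplate’s exact code:

```javascript
// Sketch of a date filter similar to readableDate, using the built-in
// Intl.DateTimeFormat API (the exact format is an assumption):
function readableDate(dateObj) {
  return new Intl.DateTimeFormat("en-GB", {
    day: "2-digit",
    month: "long",
    year: "numeric",
    timeZone: "UTC",
  }).format(dateObj);
}

// In .eleventy.js, a custom filter is registered like this:
// eleventyConfig.addFilter("readableDate", readableDate);

console.log(readableDate(new Date(Date.UTC(2020, 10, 4)))); // "04 November 2020"
```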

    Polishing The Site Design With CSS

    Okay, so now we have a pretty solid site created. We have pages, hero images, an events list, and a contact form. We’re not constrained by the choice of any theme, so we can do whatever we want with the site’s design… the sky is the limit! It’s up to you to make your site performant, responsive, and aesthetically pleasing. I made some styling and markup changes to get things to our final commit.

    Completed website
    Our website conversion is complete. (Large preview)

    Now we can tell the world about all of our hard work. Let’s publish this site.

    Publishing The Site

    Oh, but wait. It’s already published! We’ve been working in this nice workflow all along, where our updates to GitHub automatically propagate to Netlify and get rebuilt into fresh, fast HTML. Updates are as easy as a git push. Netlify detects the changes from git, processes markdown into HTML, and serves the static site. When you’re done and ready for a custom domain, Netlify lets you use your existing domain for free. Visit Site Settings > Domain Management for all the details, including how you can leverage Netlify’s free HTTPS certificate with your custom domain.

    Advanced: Images, Contact Forms, And Content Management

    This was a simple site with only a few images. You may have a more complicated site, though. Netlify’s Large Media service allows you to upload full-resolution images to GitHub, and stores a pointer to the image in Large Media. That way, your GitHub repository is not jam-packed with image data, and you can easily add markup to your site to request optimized crops and sizes of images at request time. I tried this on my own larger sites and was really happy with the responsiveness and ease of setup.

    Remember that contact form that was installed with our boilerplate? It just works. When you submit the contact form, you’ll see submissions in Netlify’s administration section. Select “Forms” for your site. You can configure Netlify to email you when you get a new form submission, and you can also add a custom confirmation page in your form’s code. Create a page in your site at /contact/success, for example, and then within your form tag (in form.njk), add action="/contact/success" to redirect users there once the form has been submitted.

    The boilerplate also configures the site to be used with Netlify’s content manager. Configuring this to work well for a non-technical person is beyond the scope of the article, but you can define templates and have updates made in Netlify’s content manager sync back to GitHub and trigger automatic redeploys of your site. If you’re comfortable with the workflow of making updates in markdown and pushing them to GitHub, though, this capability is likely something you don’t need.

    Smashing Editorial
    (ra, yk, il)


    Internationalization And Localization For Static Sites — Smashing Magazine

    11/04/2020

    About The Author

    Sam Richard, better known as Snugug throughout the Internet, is a developer with design tendencies and a love of building open source tools to help with both. …
    More about
    Sam

    Internationalization and localization is more than just writing your content in multiple languages. You need a strategy to determine which localization to send, and code to do it. You need to be able to support not just different languages, but different regions with the same language. Your UI needs to be responsive, not just to screen size, but to different languages and writing modes. Your content needs to be structured, down to the microcopy in your UI and the format of your dates, to be adaptable to any language you throw at it. Doing all of this with a static site generator, like Eleventy, can be even harder, because you may not have a database, let alone a server. It can all be done, though; it just takes planning.

    When building out chromeOS.dev, we knew that we needed to make it available to a global audience. Making sure that our codebase could support multiple locales (language, region, or combination of the two) without needing to custom-code each one, while allowing translation to be done with as little of that system’s knowledge as possible, would be critical to making this happen. Our content creators needed to be able to focus on creating content, and our translators on translating content, with as little work as possible to get their work into the site and deployed. Getting these sometimes conflicting set of needs right is the heart of what it takes to internationalize codebases and localize sites.

    Internationalization (i18n) and localization (l10n) are two sides of the same coin. Internationalization is all about how, in our case, software, gets designed so that it can be adapted for multiple languages and regions without needing engineering changes. Localization, on the other hand, is about actually adapting the software for those languages and regions. Internationalization can happen across the whole website stack; from HTML, CSS, and JS to design considerations and build systems. Localization happens mostly in content creation (both long-form copy and microcopy) and management.

    Note: For those curious, i18n and l10n are types of abbreviations known as numeronyms. A11y, for accessibility, is another common numeronym in web development.

    Internationalization (i18n)

    When figuring out internationalization, there are generally three items you need to consider: how to figure out what language and/or region the user wants, how to make sure they get content in their preferred localization, and how to adapt your site to adjust to those differences. While implementation specifics may change for dynamic sites (that render a page when a user requests it) and static sites (where pages are generated before getting deployed), the core concepts should stay the same.

    Determining User’s Language And Region

    The first thing to consider when figuring out internationalization is to determine how you want users to access localized content. This decision will become foundational to how you set up other systems, so it’s important to decide this early and ensure that the tradeoffs work well for your users.

    Generally, there are three high-level ways of determining what localization to serve to users:

    1. Location from IP address;
    2. Accept-Language header or navigator.languages;
    3. Identifier in URL.

    Many systems wind up combining one, two, or all three, when deciding what localization to serve. As we were investigating, though, we found issues with using IP addresses and Accept-Language headers that we thought were significant enough to remove from consideration for us:

    • A user’s preferred language often doesn’t correlate to their physical location, which IP address provides. Just because someone is physically located in America, for instance, does not mean that they would prefer English content.
    • Location analysis from IP addresses is difficult, generally unreliable, and may prevent the site from being crawled by search engines.
    • Accept-Language headers are often not explicitly set by users, and only provide information about language, not region. Because of these limitations, the header may be helpful for an initial guess about language, but isn’t necessarily reliable.

    For these reasons, we decided that it would be better for us to not try and infer language or region before a user lands on our site, but rather have strong indicators in our URLs. Having strong indicators also allows us to assume that they’re getting the site in the language they want from their access URL alone, provides for an easy way to share localized content directly without concern of redirection, and provides a clean way for us to let users switch their preferred language.

    There are three common patterns for building identifiers into URLs:

    1. Provide different domains (usually TLDs or subdomains) for different regions and languages (e.g. example.com and example.de, en.example.org and de.example.org);
    2. Have localized sub-directories for content (e.g. example.com/en and example.com/de);
    3. Serve localized content based on URL parameters (e.g. example.com?loc=en and example.com?loc=de).

    While commonly used, URL parameters are generally not recommended because it’s difficult for users to recognize the localization (along with a number of analytics and management issues). We also decided that different domains weren’t a good solution for us; our site is a Progressive Web App, and every domain, including TLDs and subdomains, is considered a different origin, effectively requiring a separate PWA for each localization.

    We decided to use subdirectories, which provided a bonus of us being able to localize on language only (example.com/en) or language and region (example.com/en-US and example.com/en-GB) as needed while maintaining a single PWA. We also decided that every localization of our site would live in a subdirectory so one language isn’t elevated above another, and that all URLs, except for the subdirectory, would be identical across localizations based on the authoring language, allowing users to easily change localizations without needing to translate URLs.
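To make the subdirectory scheme concrete, here’s a small sketch of how a locale could be extracted from a URL path. The helper name and locale list are invented for illustration, not taken from the chromeOS.dev codebase:

```javascript
// Hypothetical helper: pull the locale subdirectory out of a pathname,
// given a list of supported localizations (this list is illustrative).
const supportedLocales = ["en", "en-US", "en-GB", "de", "fr"];

function localeFromPath(pathname) {
  // The first non-empty path segment is the candidate locale.
  const [first] = pathname.split("/").filter(Boolean);
  return supportedLocales.includes(first) ? first : null;
}

console.log(localeFromPath("/en-GB/docs/start")); // "en-GB"
console.log(localeFromPath("/docs/start"));       // null
```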

    Serving Localized Content

    Once a strategy for determining a user’s language and region has been determined, you need a way to reliably serve them the right content. At a minimum, this will require some form of stored information, be it in a cookie, some local storage, or part of your app’s custom logic. Being able to keep a user’s localization preferences is an important part of i18n user experience; if a user has identified they want content in German, and they land on English content, you should be able to identify their preferred language and redirect them appropriately. This can be done on the server, but the solution we went with for chromeOS.dev is hosting and server setup agnostic: we used service workers. The user’s journey is as follows:

    • A user comes to our site for the first time. Our service worker isn’t installed.
    • Whatever localization they land on we set as their preferred language in IndexedDB. For this, we presume they’re landing there through some means, either social, referral, or search, that has directed them based on other localization contexts we don’t have. If a user lands without a localization set, we set it to English, as that’s our site’s primary language. We also have a language switcher in our footer to allow a user to change their language. At this point, our service worker should be installed.
    • After the service worker is installed, we intercept all URL requests for site navigation. Because our localizations are subdirectory based, we can readily identify what localization is being requested. Once identified, we check if the requested page is in a localized subdirectory, check if the localized subdirectory is in a list of supported localizations, and check if the localized subdirectory matches their preferences stored in IndexedDB. If it’s not in a localized subdirectory or the localized subdirectory matches their preferences, we serve the page; otherwise we do a 302 redirect from our service worker for the right localization.

    We bundled our solution into a Workbox plugin, Service Worker Internationalization Redirect. The plugin, along with its preferences sub-module, can be combined to set and get a user’s language preference and manage redirection when combined with Workbox’s registerRoute method and filtering requests on request.mode === 'navigate'.

    A full, minimal example looks like this:

    Client Code
    import { preferences } from 'service-worker-i18n-redirect/preferences';
    window.addEventListener('DOMContentLoaded', async () => {
      const language = await preferences.get('lang');
      if (language === undefined) {
        preferences.set('lang', lang.value); // Language determined from localization user landed on
      }
    });
    
    Service Worker Code
    import { StaleWhileRevalidate } from 'workbox-strategies';
    import { CacheableResponsePlugin } from 'workbox-cacheable-response';
    import { i18nHandler } from 'service-worker-i18n-redirect';
    import { preferences } from 'service-worker-i18n-redirect/preferences';
    import { registerRoute } from 'workbox-routing';
    
    // Create a caching strategy
    const htmlCachingStrategy = new StaleWhileRevalidate({
      cacheName: 'pages-cache',
      plugins: [
        new CacheableResponsePlugin({
          statuses: [200],
        }),
      ],
    });
    
    // Array of supported localizations
    const languages = ['en', 'es', 'fr', 'de', 'ko'];
    
    // Use it for navigations
    registerRoute(
      ({ request }) => request.mode === 'navigate',
      i18nHandler(languages, preferences, htmlCachingStrategy),
    );
    

    With the combination of the client-side and service worker code, users’ preferred localization will automatically get set when they hit the site the first time and, if they navigate to a URL that isn’t in their preferred localizations, they’ll be redirected.

    Adapting Site User Interface

    There is a lot that goes into properly adapting user interfaces, so while not everything will be covered here, there are a handful of more subtle things that can and should be managed programmatically.

    Blockquote Quotes

    A common design pattern is wrapping blockquotes in quotation marks, but did you know that the characters used for those quotation marks vary with localization? Instead of hard-coding them, use open-quote and close-quote to ensure the correct quotes are used for the correct language.

    Blockquote from the style guide, using open-quote, close-quote for the quotes at the start and end, on a page with lang=”en”
    open-quote and close-quote for lang=“en” appear as two superscript ticks facing inward towards the text. (Large preview)
    Blockquote from our style guide, using open-quote, close-quote for the quotes at the start and end, on a page with lang=”fr”
    open-quote and close-quote for lang=“fr” appear as pairs of small less than symbols before the text and a pair of small greater than symbols after the text, slightly below center on the text. (Large preview)
    Date And Number Format

    Both dates and numbers have a .toLocaleString method that formats them based on a localization (language and/or region). Browsers that support it ship with all localizations available, making it readily usable there, but Node.js doesn’t. Fortunately, the full-icu module for Node allows you to use all of the localization data available. To do so, after installing the module, run your code with the NODE_ICU_DATA environment variable set to the path of the module, e.g. NODE_ICU_DATA=node_modules/full-icu.
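For instance, in an environment with full localization data, formatting adapts automatically per locale (the locales and values below are just examples):

```javascript
// Locale-aware formatting with the built-in toLocaleString /
// toLocaleDateString methods; no libraries required.
const price = 1234567.89;
console.log(price.toLocaleString("en-US")); // "1,234,567.89"
console.log(price.toLocaleString("de-DE")); // "1.234.567,89"

const concert = new Date(Date.UTC(2020, 10, 4));
console.log(concert.toLocaleDateString("en-US", { timeZone: "UTC" })); // "11/4/2020"
console.log(concert.toLocaleDateString("de-DE", { timeZone: "UTC" })); // "4.11.2020"
```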

    HTML Meta Information

    There are three areas in your HTML tag and headers that should be updated with each localization:

    • The page’s language,
    • Writing direction,
    • Alternative languages the page is available in.

    The first two go on the html element, with the lang and dir attributes respectively, e.g. <html lang="en" dir="ltr"> for US English. Properly setting these will ensure content flows in the right direction and lets browsers understand what language the page is in, enabling additional features like translating the content. You should also include rel="alternate" links to let search engines know that a page has been fully translated, so including <link href="/es" rel="alternate" hreflang="es"> on our English landing page will let search engines know that it has a translation they should be on the lookout for.

    Intrinsic Design

    Localizing content can present design challenges as different translations will take up a varying amount of room on the page. Some languages, like German, have longer words requiring more horizontal space or more forgiving text wrapping. Other languages, like Arabic, have taller typefaces requiring more vertical space. Fortunately, there are a number of CSS tools for making spacing and layout responsive to not just the viewport size, but to the content as well, meaning they better adapt to multiple languages.

    There are a number of CSS units specifically designed for working with content. There are the em and rem units representing the calculated font-size and root font-size, respectively. Swapping fixed-size px values for these units can go a long way in making a site more responsive to its content. Then there’s the ch unit, representing the inline size of the 0 (zero) glyph in a font. This allows you to tie things like width, for instance, directly to the content it contains.

    These units can then be combined with existing, powerful CSS tools for layout, specifically flexbox and grid, to build components that adapt to their size and layouts that adapt to their content. Enhancing those with logical properties for borders, margins, and padding instead of physical properties makes those layouts and components automatically adapt to writing mode, too. The power of intrinsic web design (coined by Jen Simmons), content-aware units, and logical properties allows for interfaces to be designed and built so they can adapt to any language, not just any screen size.

    Localization (l10n)

    The most obvious form localization takes is translating content from one language to another. In more subtle forms, translation happens not only by language, but by the region in which it’s spoken: for instance, English as spoken in America versus English as spoken in the United Kingdom, South Africa, or Australia. To be successful here, understanding what to translate and how to structure your content for translation is critical.

    Content Strategy

    There are some parts of a software project that are important to localize, and some that aren’t. CSS class names, JavaScript variables, and other places in your codebase that are structural, but not user-facing, probably don’t need to be localized. Figuring out what needs to be localized, and how to structure it, comes down to content strategy.

    Content strategy has a lot of definitions, but here it means the structure of content, microcopy (the words and phrases used throughout a project not tied to a specific piece of content), and the connections thereof. For more detailed information on content strategy, I’d recommend Content Strategy for Mobile by Karen McGrane and Designing Connected Content by Carrie Hane and Mike Atherton.

    For chromeOS.dev, we wound up codifying content models that describe the structure of our content. Content models aren’t just for long-form article-like content; a content model should exist for any entity that a user may specifically want from you, like an author, document, or even reusable media assets. Good content models include individually-addressable pieces, or chunks, of a larger conceptual piece, while excluding chunks that are tangentially related or can be referenced from another content model. For instance, a content model for a blog post may include a title, an array of tags, a reference to an author, the date published, and the body of the post, but it shouldn’t include the string for breadcrumbs, or the author’s name and picture, which should be its own content model. Content models don’t change from localization to localization; they are site structure. An instance of a content model is tied to a localization, and those instances can be localized.
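As an illustration, a blog-post content model like the one described could be codified as a plain structure. The field names and values below are hypothetical, not the actual chromeOS.dev models; note the author is a reference to its own model rather than embedded data:

```javascript
// Hypothetical blog-post content model: individually-addressable chunks,
// with tangential data (the author) referenced rather than embedded.
const blogPost = {
  title: "Fall Recital",
  tags: ["events", "music"],
  author: "elizabeth-dawson", // reference to a separate author content model
  datePublished: "2020-11-04",
  body: "The annual fall recital takes place…",
};

console.log(blogPost.tags.length); // 2
```

An instance like this is what gets localized; the model itself (its shape) stays the same across every locale.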

    Content models only cover part of what needs to be localized, though. The rest—your “Read More” buttons, your “Menu” title, your disclaimer text—that’s all microcopy. Microcopy needs structure, too. While content models may feel natural to create, especially for template-driven sites, microcopy models tend to be less obvious and are often overlooked accidentally by writing what’s needed directly in a template.

    By building content and microcopy models and enforcing them—through a content management system, linting, or review—you’re able to ensure that localization can focus on localizing.

    Localize Values, Not Keys

    Content and microcopy models usually generate structures akin to objects in a codebase, be it database entries, JSON objects, YAML, or Front Matter. Don’t localize object keys! If you have your Search text microcopy located in a microcopy object at microcopy.search.text, don’t put it in a microcopie object at microcopie.chercher.texte. Keys in models should be treated as localization-agnostic identifiers so they can be reliably used in reusable templates and relied upon throughout a codebase. This also means that object keys shouldn’t be displayed to end-users as content or microcopy.
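A small sketch of the principle: the keys are identical across locales, and only the values change. The microcopy objects here are hypothetical:

```javascript
// Hypothetical microcopy objects: keys stay identical across locales,
// only the values are localized.
const microcopy = {
  en: { search: { text: 'Search' } },
  fr: { search: { text: 'Rechercher' } },
};

// Templates look values up by the same locale-agnostic path everywhere.
function t(locale, path) {
  return path.split('.').reduce((obj, key) => obj[key], microcopy[locale]);
}

console.log(t('en', 'search.text')); // → Search
console.log(t('fr', 'search.text')); // → Rechercher
```

Because the path `search.text` never changes, a single template works for every localization.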

    Static Site Setup

    For chromeOS.dev, we used Eleventy (11ty) with Nunjucks as our static site generator, but these recommendations for setting up a static site generator should be applicable to most static site generators. Where something is 11ty specific, it will be called out.

    Folder Structure

    Static site generators that compile based on folder structure are particularly good at supporting the subdirectory i18n method. 11ty also supports a data cascade with global data and a means of generating pages from data through pagination, so combining these three concepts yields a basic folder structure that looks like the following:

    .
    └── pages
       ├── _data
       ├── _generated
       └── {{locale-code}}
          ├── {{locale-code}}.11tydata.js
          ├── _data
          └── [...content]
    

    At a top level, there’s a directory to hold the pages for a site, here called pages. Nested inside, there’s a _data folder containing global data files. This folder is important when talking about helpers next. Then, there’s a _generated folder. We have a number of pages that, instead of having their own content, are generated from existing content, small amounts of microcopy, or a combination of both. Think a home page, a search page, or a blog section’s landing page. Because these pages are highly templated, we store the templates in the _generated folder and build them from there instead of having individual HTML or Markdown files for each. These folders are prefixed with an underscore to indicate that they don’t output pages directly underneath them, but rather are used to create pages elsewhere.

    Next, l10n subdirectories! Each directory should be named for the BCP47 language tag (more commonly, locale code) for the localization it contains: for instance, en for English, or en-US for American English. In the chromeOS.dev codebase, we often refer to these as locales, too. These folders will become the localization subdirectories, segmenting content to a localization. 11ty’s data cascade allows for data to be available to every file in a directory and its children if the file is at the root of a directory and named the same as the directory (called directory data files). 11ty uses an object returned from this file, or a function that returns an object, and injects it into the variables made available for templating, so we have access to data here for all content of that localization.

    To aid in the maintainability of these files, we wrote a helper called l10n-data, part of our static site scaffolding, that takes advantage of this folder structure to build a cascade of localized data, allowing data to be localized piecemeal. It does this by loading the files in each locale-specific _data directory into that locale’s directory data file. If you look in our English locale data directory, for instance, you’ll see microcopy models like locale.json, which defines the language code and writing direction that will then be rendered into our HTML; newsletter.yml, which defines the microcopy needed for our newsletter signup; and a microcopy.yml file, which includes general microcopy used in multiple places throughout the site that doesn’t fit into a more specific file. Everywhere any of this microcopy gets used, we pull it from this data, made available through 11ty injecting data variables into our templates.
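The cascade can be pictured as a deep merge of a default locale’s data with a localization’s partial overrides. This is a sketch of the idea, not the actual l10n-data implementation:

```javascript
// Sketch of the cascade idea behind an l10n-data style helper (not the
// actual chromeOS.dev implementation): start from the default locale's
// data and recursively overlay whatever the target locale has localized.
function mergeLocaleData(defaults, overrides) {
  const merged = { ...defaults };
  for (const [key, value] of Object.entries(overrides)) {
    if (value && typeof value === 'object' && !Array.isArray(value)) {
      merged[key] = mergeLocaleData(defaults[key] || {}, value);
    } else {
      merged[key] = value;
    }
  }
  return merged;
}

const en = { newsletter: { cta: 'Subscribe', legal: 'Terms apply' } };
const es = { newsletter: { cta: 'Suscríbete' } }; // partially localized

console.log(mergeLocaleData(en, es));
// → { newsletter: { cta: 'Suscríbete', legal: 'Terms apply' } }
```

Anything the locale hasn’t localized yet falls back to the default, so translators can work file by file.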

    Microcopy tends to be the hardest to manage, while the rest of the content is mostly straightforward. Put your content, often Markdown files or HTML, into the localized subfolder. For static site generators that work on folder structure, the file name and folder structure of the content will typically map 1:1 to the final URL for that content, so a Markdown file at en/web/pwas.md would output to the URL en/web/pwas. Following our “values, not keys” principle of localization, we decided that we wouldn’t localize content file names (and therefore paths), making it easier for us to keep track of the same file’s localization status across locales and for users to know they’re on the right page between different locales.

    I18n Helpers

    In addition to content and microcopy, we found we needed to write a number of helper modules to make working with localized content easier. 11ty has a concept called a filter that allows content to be modified before being rendered. We wound up building four of them to help with i18n templating.

    The first is a date filter. We standardized on having all dates across our content written as a YAML date value, because we mostly write them in YAML, and they become available in our templates as a full UTC timestamp. When using the full-icu module and config, the date string (the content being filtered), along with the locale code for the content being rendered, can be passed directly to Date.toLocaleString (with optional formatting options) to render a localized date. Date.toLocaleDateString can be used instead if you want just the date portion rather than the full localized date and time.
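A minimal version of such a date filter might look like this; the formatting options are illustrative, and the real filter’s options may differ:

```javascript
// Sketch of a locale-aware date filter: takes the date value from the
// content and the locale code of the page being rendered.
// Requires a Node build with full ICU data for non-English locales.
function localizedDate(dateValue, locale, options = {}) {
  return new Date(dateValue).toLocaleDateString(locale, {
    timeZone: 'UTC', // YAML dates parse as UTC midnight; keep the day stable
    year: 'numeric',
    month: 'long',
    day: 'numeric',
    ...options,
  });
}

console.log(localizedDate('2020-08-05', 'en-US')); // → August 5, 2020
```

Registered as an 11ty filter, a template could then render `{{ date | localizedDate(locale) }}` style output without any per-locale formatting code.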

    The second filter is something we called localURL. This takes a local URL (the content being filtered) and the locale the URL should be in, and swaps the locale segment. It changes, for example, /en/linux to /es/linux.
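Because locale subdirectories always lead the path, the swap amounts to replacing the first path segment. A sketch, not the actual filter:

```javascript
// Sketch of a localURL-style filter: swap the leading locale segment
// of a site-local URL for another locale code.
function localURL(url, locale) {
  const [, , ...rest] = url.split('/'); // drop '' and the current locale
  return ['', locale, ...rest].join('/');
}

console.log(localURL('/en/linux', 'es')); // → /es/linux
```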

    The final two filters are about retrieving localized information from a locale code alone. The third leverages the iso-639-10 module to transform a locale code into the language’s name in that language. This we use primarily for our language selector. The fourth uses the iso-i18n-countries module to retrieve a list of countries in that language. This we use primarily for building forms with country lists.
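If you’d rather avoid extra dependencies, modern Node (14+ with full ICU) can derive similar information with the standard Intl.DisplayNames API. A sketch of the equivalent filters:

```javascript
// Sketch using the built-in Intl.DisplayNames API (Node 14+ with full ICU)
// as an alternative to dedicated language-name and country-name modules.
function languageNameInNativeLanguage(localeCode) {
  return new Intl.DisplayNames([localeCode], { type: 'language' }).of(localeCode);
}

function countryName(countryCode, localeCode) {
  return new Intl.DisplayNames([localeCode], { type: 'region' }).of(countryCode);
}

console.log(languageNameInNativeLanguage('en')); // → English
console.log(countryName('US', 'en')); // → United States
```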

    In addition to filters, 11ty has a concept called collections which is a grouping of content. 11ty makes a number of collections available by default, and can even build collections off of tags. In a multilingual site, we found that we wanted to build custom collections. We wound up building a number of helper functions to build collections based on localization. This allows us to do things like have location-specific tag collections or site section collections without needing to filter in our templates against all content on our site.
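A locale-scoped collection boils down to filtering on the content path. This sketch shows the idea as a pure function; in an 11ty config it would be registered with eleventyConfig.addCollection, and the helpers here are illustrative, not the actual chromeOS.dev code:

```javascript
// Sketch: a collection containing only one locale's posts for one section,
// filtered by the leading path segments of each item's file path.
function byLocaleAndSection(items, locale, section) {
  return items.filter((item) =>
    item.filePathStem.startsWith(`/${locale}/${section}/`)
  );
}

const items = [
  { filePathStem: '/en/news/launch' },
  { filePathStem: '/es/news/launch' },
  { filePathStem: '/en/linux/setup' },
];

console.log(byLocaleAndSection(items, 'en', 'news'));
// → [ { filePathStem: '/en/news/launch' } ]
```

Templates can then loop over a pre-filtered collection instead of filtering all site content on every render.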

    Our final, and most critical, helper was our site global data. Relying on the locale-code based subdirectory structure, this function dynamically determines what localizations the site supports. It builds a global variable, site, which includes the l10n property, containing all of the microcopy and localization-specific content from {{locale-code}}.11tydata.js. It also contains a languages property that lists all of the available locales as an array. Finally, the function outputs a JavaScript file detailing what languages are supported by the site and individual files for each entry in {{locale-code}}.11tydata.js, keyed per localization, all designed to be imported by our browser scripts. The heavy lifting of this file ties our static site to our front-end JavaScript with the single source of truth being the localization information we already need. It also allows us to programmatically generate pages based on our localizations by looping over site.l10n. This, combined with our localization-specific collections, let us use 11ty’s pagination to create localized home and news landing pages without maintaining separate HTML pages for each.

    Conclusion

    Getting internationalization and localization right can be difficult; understanding how different strategies affect complexity is critical to making it easier. Pick an i18n strategy that is a natural fit for static sites, such as subdirectories, then build tools off of that to automate parts of i18n and l10n from the content being produced. Build robust content and microcopy models. Leverage service workers for server-agnostic localization. Tie it all together with a design that’s responsive not just to screen size, but to content. In the end you’ll have a site that users of all locales will love, and that can be maintained by authors and translators as if it were a simple single-locale site.

    Smashing Editorial
    (ra, il)


    Simplify Your Stack With A Custom-Made Static Site Generator — Smashing Magazine

    09/23/2020

    About The Author

    Bryan is a designer, developer, and educator with a passion for CSS and static sites. He actively works to mentor and teach developers and designers the value …
    More about
    Bryan
    Robinson

    In modern development, there are so many great tools for developing websites, but often they are more than what’s necessary for a given project. In this article, we’ll explore how to take a humble HTML page and make its content editable in a CMS with no frameworks and no client-side JavaScript.

    With the advent of the Jamstack movement, statically-served sites have become all the rage again. Most developers serving static HTML aren’t authoring native HTML. To have a solid developer experience, we often turn to tools called Static Site Generators (SSG).

    These tools come with many features that make authoring large-scale static sites pleasant. Whether they provide simple hooks into third-party APIs like Gatsby’s data sources or provide in-depth configuration like 11ty’s huge collection of template engines, there’s something for everyone in static site generation.

    Because these tools are built for diverse use cases, they have to have a lot of features. Those features make them powerful. They also make them quite complex and opaque for new developers. In this article, we’ll take the SSG down to its basic components and create our very own.

    What Is A Static Site Generator?

    At its core, a static site generator is a program that performs a series of transformations on a group of files to convert them into static assets, such as HTML. What sort of files it can accept, how it transforms them, and what types of files come out differentiate SSGs.

    Jekyll, an early and still popular SSG, uses Ruby to process Liquid templates and Markdown content files into HTML.

    Gatsby uses React and JSX to transform components and content into HTML. It then goes a step further and creates a single-page application that can be served statically.

    11ty renders HTML from templating engines such as Liquid, Handlebars, Nunjucks, or JavaScript template literals.

    Each of these platforms has additional features to make our lives easier. They provide theming, build pipelines, plugin architecture, and more. With each additional feature comes more complexity, more magic, and more dependencies. They’re important features, to be sure, but not every project needs them.

    Between these three different SSGs, we can see another common theme: data + templates = final site. This seems to be the core functionality of static site generators. This is the functionality we’ll base our SSG around.

    Our New Static Site Generator’s Technology Stack: Handlebars, Sanity.io And Netlify

    To build our SSG, we’ll need a template engine, a data source, and a host that can run our SSG and build our site. Many generators use Markdown as a data source, but what if we took it a step further and natively connected our SSG to a CMS?

    • Data Source: Sanity.io
    • Data fetching and templating: Node and Handlebars
    • Host and Deployment: Netlify.

    Prerequisites

    • NodeJS installed
    • Sanity.io account
    • Knowledge of Git
    • Basic knowledge of command line
    • Basic knowledge of deployment to services like Netlify.

    Note: To follow along, you can find the code in this repository on GitHub.

    Setting Up Our Document Structure In HTML

    To start our document structure, we’re going to write plain HTML. No need to complicate matters yet.

    In our project structure, we need to create a place for our source files to live. In this case, we’ll create a src directory and put our index.html inside.

    In index.html, we’ll outline the content we want. This will be a relatively simple about page.

    <!DOCTYPE html>
    <html lang="en">
    <head>
        <meta charset="UTF-8">
        <meta name="viewport" content="width=device-width, initial-scale=1.0">
        <title>Title of the page!</title>
    </head>
    <body>
        <h1>The personal homepage of Bryan Robinson</h1>
    
        <p>Some paragraph and rich text content next</p>
    
        <h2>Bryan is on the internet</h2>
        <ul>
            <li><a href="linkURL">List of links</a></li>
        </ul>
    </body>
    </html>

    Let’s keep this simple. We’ll start with an h1 for our page. We’ll follow that with a few paragraphs of biographical information, and we’ll anchor the page with a list of links to see more.

    Convert Our HTML Into A Template That Accepts Data

    After we have our basic structure, we need to set up a process to combine this with some amount of data. To do this, we’ll use the Handlebars template engine.

    At its core, Handlebars takes an HTML-like string, inserts data via rules defined in the document, and then outputs a compiled HTML string.
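The compile-then-render round trip is the whole idea. This toy illustrates it with a hand-rolled substitution function so it runs standalone; it is not Handlebars itself, which handles far more than simple variables:

```javascript
// Toy illustration of the compile-then-render idea (not Handlebars itself):
// "compiling" turns a template string into a reusable function, and calling
// that function with data produces the final HTML string.
function compile(source) {
  return (data) =>
    source.replace(/\{\{\s*(\w+)\s*\}\}/g, (_, key) => data[key] ?? '');
}

const template = compile('<h1>{{ title }}</h1>');
console.log(template({ title: 'Hello' })); // → <h1>Hello</h1>
```

Handlebars follows the same shape: `Handlebars.compile(source)` returns a function you call with data.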

    To use Handlebars, we’ll need to initialize a package.json and install the package.

    Run npm init -y to create the structure of a package.json file with some default content. Once we have this, we can install Handlebars.

    npm install handlebars

    Our build script will be a Node script. This is the script we’ll use locally to build, but also what our deployment vendor and host will use to build our HTML for the live site.

    To start our script, we’ll create an index.js file and require two packages at the top. The first is Handlebars and the second is a default module in Node for accessing the current file system.

    const fs = require('fs');
    const Handlebars = require('handlebars');

    We’ll use the fs module to access our source file, as well as to write to a distribution file. To start our build, we’ll create a main function for our file to run when called and a buildHTML function to combine our data and markup.

    function buildHTML(filename, data) {
      const source = fs.readFileSync(filename,'utf8').toString();
      const template = Handlebars.compile(source);
      const output = template(data);
    
      return output
    }
    
    async function main(src, dist) {
      const html = buildHTML(src, { "variableData": "This is variable data"});
     
      fs.writeFile(dist, html, function (err) {
        if (err) return console.log(err);
        console.log('index.html created');
      });
    }
    
    main('./src/index.html', './dist/index.html');

    The main() function accepts two arguments: the path to our HTML template and the path we want our built file to live. In our main function, we run buildHTML on the template source path with some amount of data.

    The build function converts the source document into a string and passes that string to Handlebars. Handlebars compiles a template using that string. We then pass our data into the compiled template, and Handlebars renders a new HTML string replacing any variables or template logic with the data output.

    We return that string into our main function and use the writeFile method provided by Node’s file-system module to write the new file in our specified location if the directory exists.

    To prevent an error, add a dist directory into your project with a .gitkeep file in it. We don’t want to commit our built files (our build process will do this), but we’ll want to make sure to have this directory for our script.

    Before we create a CMS to manage this page, let’s confirm it’s working. To test, we’ll modify our HTML document to use the data we just passed into it. We’ll use the Handlebars variable syntax to include the variableData content.

    <h1>{{ variableData }}</h1>

    Now that our HTML has a variable, we’re ready to run our node script.

    node index.js

    Once the script finishes, we should have a file at /dist/index.html. If we open this in a browser, we’ll see our markup rendered, but also our “This is variable data” string, as well.

    Connecting To A CMS

    We have a way of putting data together with a template, now we need a source for our data. This method will work with any data source that has an API. For this demo, we’ll use Sanity.io.

    Sanity is an API-first data source that treats content as structured data. They have an open-source content management system to make managing and adding data more convenient for both editors and developers. The CMS is what’s often referred to as a “Headless” CMS. Instead of a traditional management system where your data is tightly coupled to your presentation, a headless CMS creates a data layer that can be consumed by any frontend or service (and possibly many at the same time).

    Sanity is a paid service, but they have a “Standard” plan that is free and has all the features we need for a site like this.

    Setting Up Sanity

    The quickest way to get up and running with a new Sanity project is to use the Sanity CLI. We’ll start by installing that globally.

    npm install -g @sanity/cli

    The CLI gives us access to a group of helpers for managing, deploying, and creating. To get things started, we’ll run sanity init. This will run us through a questionnaire to help bootstrap our Studio (what Sanity calls their open-source CMS).

    Select a Project to Use:
       Create new project
       HTML CMS
    
    Use the default dataset configuration?   
       Y // this creates a "Production" dataset
    
    Project output path:
       studio // or whatever directory you'd like this to live in
    
    Select project template
       Clean project with no predefined schemas

    This step will create a new project and dataset in your Sanity account, create a local version of Studio, and tie the data and CMS together for you. By default, the studio directory will be created in the root of our project. In larger-scale projects, you may want to set this up as a separate repository. For this project, it’s fine to keep this tied together.

    To run our Studio locally, we’ll change the directory into the studio directory and run sanity start. This will run Studio at localhost:3333. When you log in, you’ll be presented with a screen to let you know you have “Empty schema.” With that, it’s time to add our schema, which is how our data will be structured and edited.

    Creating Sanity Schema

    The way you create documents and fields within Sanity Studio is to create schemas within the schemas/schema.js file.

    For our site, we’ll create a schema type called “About Details.” Our schema will flow from our HTML. In general, we could make most of our webpage a single rich-text field, but it’s a best practice to structure our content in a de-coupled way. This provides greater flexibility in how we might want to use this data in the future.

    For our webpage, we want a set of data that includes the following:

    • Title
    • Full Name
    • Biography (with rich text editing)
    • A list of websites with a name and URL.

    To define this in our schema, we create an object for our document and define its fields. An annotated list of our content with its field type:

    • Title — string
    • Full Name — string
    • Biography — array of “blocks”
    • Website list — array of objects with name and URL string fields.
    types: schemaTypes.concat([
        /* Your types here! */
    
        {
            title: "About Details",
            name: "about",
            type: "document",
            fields: [
                {
                    name: 'title',
                    type: 'string'
                },
                {
                    name: 'fullName',
                    title: 'Full Name',
                    type: 'string'
                },
                {
                    name: 'content',
                    title: 'Biography',
                    type: 'array',
                    of: [
                        {
                            type: 'block'
                        }
                    ]
                },
                {
                    name: 'externalLinks',
                    title: 'Social media and external links',
                    type: 'array',
                    of: [
                        {
                            type: 'object',
                            fields: [
                                { name: 'text', title: 'Link text', type: 'string' },
                                { name: 'href', title: 'Link url', type: 'string' }
                            ]
                        }
                    ]
                }
            ]
        }
    ])

    Add this to your schema types and save; your Studio will recompile and present you with your first document type. From here, we’ll add our content into the CMS by creating a new document and filling out the information.

    Structuring Your Content In A Reusable Way

    At this point, you may be wondering why we have a “full name” and a “title.” This is because we want our content to have the potential to be multipurpose. By including a name field instead of including the name just in the title, we give that data more use. We can then use information in this CMS to also power a resumé page or PDF. The biography field could be programmatically used in other systems or websites. This allows us to have a single source of truth for much of this content instead of being dictated by the direct use case of this particular site.

    Pulling Our Data Into Our Project

    Now that we’ve made our data available via an API, let’s pull it into our project.

    Install and configure the Sanity JavaScript client

    First thing, we need access to the data in Node. We can use the Sanity JavaScript client to forge that connection.

    npm install @sanity/client

    This will fetch and install the JavaScript SDK. From here, we need to configure it to fetch data from the project we set up earlier. To do that, we’ll set up a utility script in /utils/SanityClient.js. We provide the SDK with our project ID and dataset name, and we’re ready to use it in our main script.

    const sanityClient = require('@sanity/client');
    const client = sanityClient({
        projectId: '4fs6x5jg',
        dataset: 'production',
        useCdn: true 
      })
    
    module.exports = client;

    Fetching Our Data With GROQ

    Back in our index.js file, we’ll create a new function to fetch our data. To do this, we’ll use Sanity’s native query language, the open-source GROQ.

    We’ll build the query in a variable and then use the client that we configured to fetch the data based on the query. In this case, we build an object with a property called about. In this object, we want to return the data for our specific document. To do that, we query based on the document _id which is generated automatically when we create our document.

    To find the document’s _id, we navigate to the document in Studio and either copy it from the URL or move into “Inspect” mode to view all the data on the document. To enter Inspect, either click the “kabob” menu at the top-right or use the shortcut Ctrl + Alt + I. This view will list out all the data on this document, including our _id. Sanity will return an array of document objects, so for simplicity’s sake, we’ll return the 0th entry.

    We then pass the query to the fetch method of our Sanity client and it will return a JSON object of all the data in our document. In this demo, returning all the data isn’t a big deal. For bigger implementations, GROQ allows for an optional “projection” to only return the explicit fields you want.
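For example, a hypothetical projection that returns only the fields this page needs might look like the following; the field names match the schema above:

```javascript
// Hypothetical GROQ query with a projection: only the listed fields
// come back, instead of the whole document.
const query = `{
    "about": *[_type == 'about'][0]{
        title,
        fullName,
        content,
        externalLinks
    }
}`;
```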

    const client = require('./utils/SanityClient') // at the top of the file
    
    // ...
    
    async function getSanityData() {
        const query = `{
            "about": *[_id == 'YOUR-ID-HERE'][0]
        }`
        let data = await client.fetch(query);
        return data;
    }

    Converting The Rich Text Field To HTML

    Before we can return the data, we need to do a transformation on our rich text field. While many CMSs use rich text editors that return HTML directly, Sanity uses an open-source specification called Portable Text. Portable Text returns an array of objects (think of rich text as a list of paragraphs and other media blocks) with all the data about the rich text styling and properties like links, footnotes, and other annotations. This allows for your text to be moved and used in systems that don’t support HTML, like voice assistants and native apps.

    For our use case, it means we need to transform the object into HTML. There are NPM modules that can be used to convert portable text into various uses. In our case we’ll use a package called block-content-to-html.

    npm install @sanity/block-content-to-html

    This package will render all the default markup from the rich text editor. Each type of style can be overridden to conform to whatever markup you need for your use case. In this case, we’ll let the package do the work for us.

    const blocksToHtml = require('@sanity/block-content-to-html'); // Added to the top
    
    async function getSanityData() {
        const query = `{
            "about": *[_type == 'about'][0]
        }`
        let data = await client.fetch(query);
        data.about.content = blocksToHtml({
            blocks: data.about.content
        })
        return data;
    }

    Using The Content From Sanity.io In Handlebars

    Now that the data is in a shape we can use, we’ll pass it to our buildHTML function as the data argument.

    async function main(src, dist) {
        const data = await getSanityData();
        const html = buildHTML(src, data)
    
        fs.writeFile(dist, html, function (err) {
            if (err) return console.log(err);
            console.log('index.html created');
        });
    }

    Now, we can change our HTML to use the new data. We’ll use more variable calls in our template to pull most of our data.

    To render our rich text content variable, we’ll need to add an extra layer of braces to our variable. This will tell Handlebars to render the HTML instead of displaying the HTML as a string.

    For our externalLinks array, we’ll need to use Handlebars’ built-in looping functionality to display all the links we added to our Studio.

    <!DOCTYPE html>
    <html lang="en">
    <head>
        <meta charset="UTF-8">
        <meta name="viewport" content="width=device-width, initial-scale=1.0">
        <title>{{ about.title }}</title>
    </head>
    <body>
        <h1>The personal homepage of {{ about.fullName }}</h1>
    
        {{{ about.content }}}
    
        <h2>Bryan is on the internet</h2>
        <ul>
            {{#each about.externalLinks }}
                <li><a href="{{ this.href }}">{{ this.text }}</a></li>
            {{/each}}
        </ul>
    </body>
    </html>

    Setting Up Deployment

    Let’s get this live. We need two components to make this work. First, we want a static host that will build our files for us. Next, we need to trigger a new build of our site when content is changed in our CMS.

    Deploying To Netlify

    For hosting, we’ll use Netlify. Netlify is a static site host. It serves static assets, but has additional features that will make our site work smoothly. They have a built-in deployment infrastructure that can run our node script, webhooks to trigger builds, and a globally distributed CDN to make sure our HTML page is served quickly.

    Netlify can watch our repository on GitHub and create a build based on a command that we can add in their dashboard.

    First, we’ll need to push this code to GitHub. Then, in Netlify’s Dashboard, we need to connect the new repository to a new site in Netlify.

    Once that’s hooked up, we need to tell Netlify how to build our project. In the dashboard, we’ll head to Settings > Build & Deploy > Build Settings. In this area, we need to change our “Build command” to “node index.js” and our “Publish directory” to “./dist”.

    When Netlify builds our site, it will run our command and then check the folder we list for content and publish the content inside.

    Setting Up A Webhook

    We also need to tell Netlify to publish a new version when someone updates content. To do that, we’ll set up a Webhook to notify Netlify that we need the site to rebuild. A Webhook is a URL that can be programmatically accessed by a different service (such as Sanity) to create an action in the origin service (in this case Netlify).

    We can set up a specific “Build hook” in our Netlify dashboard at Settings > Build & Deploy > Build hooks. Add a hook, give it a name and save. This will provide a URL that can be used to remotely trigger a build in Netlify.

    Next, we need to tell Sanity to make a request to this URL when you publish changes.

    We can use the Sanity CLI to accomplish this. Inside of our /studio directory, we can run sanity hook create to connect. The command will ask for a name, a dataset, and a URL. The name can be whatever you’d like, the dataset should be production for our project, and the URL should be the URL that Netlify provided.

    Now, whenever we publish content in Studio, our website will automatically be updated. No framework necessary.

    Next Steps

    This is a very small example of what you can do when you create your own tooling. While more full-featured SSGs may be what you need for most projects, creating your own mini-SSG can help you understand more about what’s happening in your generator of choice.

    • This site publishes only one page, but with a little extra in our build script, we could have it publish more pages. It could even publish a blog post.
    • The “Developer experience” is a little lacking in the repository. We could run our Node script on any file saves by implementing a package like Nodemon or add “hot reloading” with something like BrowserSync.
    • The data that lives in Sanity can power multiple sites and services. You could create a resumé site that uses this and publishes a PDF instead of a webpage.
    • You could add CSS and make this look like a real site.
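
The first bullet can be sketched by mapping a list of routes through a single render step. The page list and the render callback here are hypothetical stand-ins for the buildHTML and data-fetching functions built earlier:

```javascript
// Sketch: extend the build to many pages by mapping routes through one
// render step. The render callback stands in for buildHTML plus data
// fetching; the page list is hypothetical.
function buildPages(pages, render) {
  return pages.map(({ src, dist, data }) => ({ dist, html: render(src, data) }));
}

const pages = [
  { src: './src/index.html', dist: './dist/index.html', data: { title: 'Home' } },
  { src: './src/post.html', dist: './dist/post/index.html', data: { title: 'Post' } },
];

// A toy render function; the real script would call buildHTML here
// and then fs.writeFile each result to its dist path.
const out = buildPages(pages, (src, data) => `<h1>${data.title}</h1>`);
console.log(out[1]); // → { dist: './dist/post/index.html', html: '<h1>Post</h1>' }
```
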
    Smashing Editorial
    (ra, yk, il)


    Creating A Static Blog With Sapper And Strapi — Smashing Magazine

    08/05/2020

    About The Author

    Daniel Madalitso Phiri is a Developer, Writer, Builder of Wacky things, DJ, Lorde superfan and Community Builder from Lusaka, Zambia.
    More about
    Daniel

    This article will take you through how to build a Svelte-powered static blog with Sapper and Strapi, as well as how to deploy the website to Netlify. You’ll understand how to build a static website, as well as use the power of a headless CMS, with a real-world example.

    In this tutorial, we will build a statically generated minimal blog with Sapper, a Svelte-based progressive JavaScript framework, for our front end, and then use Strapi, an open-source headless content management system (CMS), for the back end of our application. This tutorial is aimed at intermediate front-end developers, specifically those who want the versatility of a headless CMS, like Strapi, as well as the minimal structure of a JavaScript framework, like Sapper. Feel free to try out the demo or check out the source code on GitHub.

    To go through this article smoothly, you will need the LTS version of Node.js and either Yarn or npm installed on your device beforehand. It’s also worth mentioning that you will need a basic understanding of JavaScript and GraphQL queries.

    Before getting started, let’s get some definitions out of the way. A static-site generator is a tool that generates static websites, and a static website can be defined as a website that is sourced from purely static HTML files. For an overview of your options for static-site generators today, check out “Top 10 Static Site Generators in 2020”.

    A headless CMS, on the other hand, is a CMS accessible via an API. Unlike the traditional CMSes of the past, a headless CMS is front-end agnostic and doesn’t tie you to a single programming language or platform. Strapi’s article “Why Frontend Developers Should Use a Headless CMS” is a good resource for understanding the usefulness of a headless CMS.

    Static-site generators, like headless CMSes, are quickly gaining mainstream appeal in the front-end web development community. Both pieces of technology bring with them a much lower barrier to entry, flexibility, and a generally better developer experience. We’ll see all this and more as we build our blog.

    You might be wondering, “Why should I use this instead of the alternatives?” Sapper is based on Svelte, which is known for its speed and relatively small bundle size. In a world where performance plays a huge role in determining an effective user experience, we want to optimize for that. Developers today are spoiled for choice when it comes to front-end frameworks — if we want to optimize for speed, performance, and developer experience (like I do in this project), then Sapper is a solid choice!

    So, let’s get started building our minimal blog, starting with our Sapper front end.

    Sapper Front End

    Our front end is built with Sapper, a framework for building extremely high-performance web apps using Svelte. Sapper, which is short for “Svelte app maker”, enables developers to export pages as a static website, which we will be doing today. Svelte has a very opinionated way of scaffolding projects, using Degit.

    “Degit makes copies of Git repositories and fetches the latest commit in the repository. This is a more efficient approach than using git clone, because we’re not downloading the entire Git history.”

    First, install Degit by running npm install -g degit in your command-line interface (CLI).

    Next up, run the following commands in the CLI to set up our project.

    npx degit "sveltejs/sapper-template#rollup" frontend
    # or: npx degit "sveltejs/sapper-template#webpack" frontend
    cd frontend
    npm install
    npm run dev
    

    Note: We have the option of using either Rollup or Webpack to bundle our project. For this tutorial, we will be using Rollup.

    These commands scaffold a new project in the frontend directory, install its dependencies, and start a server on localhost.

    If you’re new to Sapper, the directory structure will need some explaining.

    Sapper’s App Structure

    If you look in the project directory, you’ll see this:

    ├ package.json
    ├ src
    │ ├ routes
    │ │ ├ # your routes here
    │ │ ├ _error.svelte
    │ │ └ index.svelte
    │ ├ client.js
    │ ├ server.js
    │ ├ service-worker.js
    │ └ template.html
    ├ static
    │ ├ # your files here
    └ rollup.config.js / webpack.config.js
    

    Note: When you first run Sapper, it will create an additional __sapper__ directory containing generated files. You’ll also notice a few extra files and a cypress directory — we don’t need to worry about those for this article.

    You will see a few files and folders. Besides those already mentioned above, these are some you can expect:

    • package.json
      This file contains your app’s dependencies and defines a number of scripts.
    • src
      This contains the three entry points for your app: src/client.js, src/server.js, and (optionally) src/service-worker.js, along with a src/template.html file.
    • src/routes
      This is the meat of the app (that is, the pages and server routes).
    • static
      This is a place to put any files that your app uses: fonts, images, and so on. For example, static/favicon.png will be served as /favicon.png.
    • rollup.config.js
      We’re using Rollup to bundle our app. You probably won’t need to change its configuration, but if you want to, this is where you would do it.

    The directory structure is pretty minimal for the functionality that the project provides. Now that we have an idea of what our project directory looks like and what each file and folder does, we can run our application with npm run dev.

    You should see the Svelte-esque starter home page of our blog.

    A screenshot of the Sapper Starter webpage.
    Your Sapper home page. (Large preview)

    This looks really good! Now that our front end is set up and working, we can move on to the back end of the application, where we will set up Strapi.

    Strapi Back End

    Strapi is both headless and self-hosted, which means we have control over our content and where it’s hosted — no server, language, or vendor lock-in to worry about, and we can keep our content private. Strapi is built with JavaScript and has a content editor built with React. We’ll use this content editor to create some content models and store actual content that we can query later on. But before we can do all of this, we have to set it up by following the instructions below.

    1. Install Strapi and Create New Project

    • Open your CLI.
    • Run yarn create strapi-app backend --quickstart. This will create a new folder named backend and build the React admin UI.

    2. Create Administrator

    A screenshot of the Strapi register screen.
    Create an admin account. (Large preview)

    3. Create Blog Collection Type

    • Navigate to “Content-Types Builder”, under “Plugins” in the left-hand menu.
    • Click the “+ Create new collection type” link.
    • Name it “blog”.
    • Click “Continue”.
    A screenshot of the Strapi dashboard - creating a new collection type
    Create a new collection type. (Large preview)
    • Add a “Text field” (short text), and name it “Title”.
    • Click the “+ Add another field” button.
    A screenshot of the Strapi dashboard - creating a new text field
    Create a new Text field. (Large preview)
    • Add a “Text field” (long text), and name it “Description”.
    • Click the “+ Add another field” button.
    A screenshot of the Strapi dashboard - creating a new text field
    Create a new Text field. (Large preview)
    • Add a “Date field” of the type “date”, and name it “Published”.
    • Click the “+ Add another field” button.
    A screenshot of the Strapi dashboard - creating a new date field
    Create a new Date field. (Large preview)
    • Add a “Rich Text field”, and name it “Body”.
    • Click the “+ Add another field” button.
    A screenshot of the Strapi dashboard - creating a new rich text field
    Create a new Rich Text field. (Large preview)
    • Add another “Text field” (short text), and name it “Slug”.
    • Click the “+ Add another field” button.
    A screenshot of the Strapi dashboard - adding a new text field
    Create a new Text field. (Large preview)
    • Add a “Relation field”.
    • On the right side of the relation, click on the arrow and select “User”.
    • On the left side of the relation, change the field name to “author”.
    A screenshot of the Strapi dashboard - creating a new relation
    Create a new Relation field. (Large preview)
    • Click the “Finish” button.
    • Click the “Save” button, and wait for Strapi to restart.

    When it’s finished, your collection type should look like this:

    A screenshot of the Blog collection type showing all its fields
    Overview of your Blog collection type. (Large preview)

    4. Add a New User to “Users” Collection Type

    • Navigate to “Users” under “Collection Types” in the left-hand menu.
    • Click “Add new user”.
    • Enter your desired “Email”, “Username”, and “Password”, and toggle the “Confirmed” button.
    • Click “Save”.
    A screenshot of the User collection type with the 'add new user' button highlighted
    Add some user content. (Large preview)

    Now we have a new user who we can attribute articles to when adding articles to our “Blog” collection type.

    5. Add Content to “Blogs” Collection Type

    • Navigate to “Blogs” under “Collection Types” in the left-hand menu.
    • Click “Add new blog”.
    • Fill in the information in the fields specified (you have the option to select the user whom you just created as an author).
    • Click “Save”.
    A screenshot of the Blog collection type with the 'add new blog' button highlighted
    Add some blog content. (Large preview)

    6. Set Roles and Permissions

    • Navigate to “Roles and Permissions” under “Plugins” in the left-hand menu.
    • Click the “Public” role.
    • Scroll down under “Permissions”, and find “Blogs”.
    • Tick the boxes next to “find” and “findone”.
    • Click “Save”.
    A screenshot of the Strapi Permissions page with the find and findone actions highlighted
    Set permissions for your Public role. (Large preview)

    7. Send Requests to the Collection Types API

    Navigate to http://localhost:1337/blogs to query your data (Strapi pluralizes the collection type’s name for its REST routes).

    You should get back some JSON data containing the content that we just added. For this tutorial, however, we will be using Strapi’s GraphQL API.
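    For reference, the REST response is an array of entries shaped roughly like the following. The field names match the collection type we created above; the values here are purely hypothetical:

    ```json
    [
      {
        "id": 1,
        "Title": "my first post",
        "Description": "a short description of the post",
        "Published": "2020-07-01",
        "Body": "the rich-text body of the post",
        "Slug": "my-first-post",
        "author": { "username": "daniel" }
      }
    ]
    ```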

    To enable it:

    • Open your CLI.
    • Run cd backend to navigate to ./backend.
    • Run yarn strapi install graphql to install the GraphQL plugin.

    Alternatively, you can do this:

    • In the admin UI, navigate to “Marketplace” under “General” in the left-hand menu.
    • Click “Download” on the GraphQL card.
    • Wait for Strapi to restart.
    A screenshot of the Strapi Marketplace with the download button on the GraphQL plugin highlighted
    Download the GraphQL plugin. (Large preview)

    When the GraphQL plugin is installed and Strapi is back up and running, we can test queries in the GraphQL playground.
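    Once Strapi is back up, the playground is typically available at http://localhost:1337/graphql. A query that mirrors the fields of our collection type looks like this (field names as defined in the Content-Types Builder above):

    ```graphql
    query Blogs {
      blogs {
        id
        Title
        Description
        Published
        Slug
        author {
          username
        }
      }
    }
    ```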

    That is all for our back-end setup. All that’s left for us to do is consume the GraphQL API and render all of this beautiful content.

    Piecing Together Both Ends

    We’ve just queried our Strapi back end and gotten back some data. All we have to do now is set up our front end to render the content that we get from Strapi via the GraphQL API. Because we are using Strapi’s GraphQL API, we will have to install the Svelte Apollo client and a few other packages to make sure everything works properly.

    Installing Packages

    • Open the CLI, and navigate to ./frontend.
    • Run npm i --save apollo-boost graphql svelte-apollo moment.

    Moment.js helps us to parse, validate, manipulate, and display dates and times in JavaScript.

    The packages are now installed, which means we are able to make GraphQL queries in our Svelte app. The blog we’re building will have three pages: “home”, “about” and “articles”. All of our blog posts from Strapi will be displayed on the “articles” page, giving users access to each article. If we think about how that would look, our “articles” page’s route will be /articles, and then each article’s route will be /articles/:slug, where slug is what we enter in the “Slug” field when adding the content in the admin UI.

    This is important to understand because we will tailor our Svelte app to work in the same way.

    In ./frontend/src/routes, you will notice a folder named “blog”. We don’t need this folder in this tutorial, so you can delete it. Doing so will break the app, but don’t worry: It’ll be back up and running once we make our “articles” page, which we’ll do now.

    • Navigate to ./frontend/src/routes.
    • Create a folder named “articles”.
    • In ./frontend/src/routes/articles, create a file named index.svelte, and paste the following code in it.
    • When pasting, be sure to replace <Your Strapi GraphQL Endpoint> with your actual Strapi GraphQL endpoint. For your local version, this will usually be http://localhost:1337/graphql.
    <script context="module">
            import ApolloClient, { gql } from 'apollo-boost';  
            import moment from 'moment';
    
            const blogQuery = gql`
            query Blogs {  
                    blogs {
                            id
                            Title
                            Description
                            Published
                            Body
                            author {
                                    username
                            }
                            Slug
                    }
            }
            `;
            export async function preload({params, query}) {
                    const client = new ApolloClient({ 
                            uri: '<Your Strapi GraphQL Endpoint>',
                            fetch: this.fetch
                             });
                    const results = await client.query({
                            query: blogQuery
                    })
                    return {posts: results.data.blogs}
            }
    </script>
    
    <script>
            export let posts;
    </script>
    
    <style>
            ul, p {
                    margin: 0 0 1em 0;
                    line-height: 1.5;
            }
            .main-title {
                    font-size: 25px;
            }
    </style>
    
    <svelte:head>
            <title>articles</title>
    </svelte:head>
    
    <h1>recent posts</h1>
    
    <ul>
    {#each posts as post}
        <li>
              <a class="main-title" rel='prefetch' href='articles/{post.Slug}'>
                {post.Title}
              </a>
        </li>
        <p> 
  {moment(post.Published).fromNow()} by {post.author.username} 
        </p>
    {/each}
    </ul>
    

    This file represents our /articles route. In the code above, we’ve imported a few packages and then used Apollo Client to make a query: blogQuery. We then stored our query response in a variable, results, and used the preload() function to process the data needed on our page. The function then returns posts, a variable with the parsed query result.

    We’ve used Svelte’s #each block to loop through the data from Strapi, displaying the title, date of publication, and author. Our <a> tag, when clicked, goes to a page defined by the slug that we entered for our post in Strapi’s admin UI. This means that when the link is clicked, we open up a page for a particular article, and the slug is used to identify that article.

    For our /articles/:slug route, create a file named [slug].svelte, in ./src/routes/articles, and paste the following code:

    <script context="module">
            import ApolloClient, { gql } from 'apollo-boost';  
            import moment from 'moment';
    
            const blogQuery = gql`
            query Blogs($Slug: String!) {
                    blogs: blogs(where: { Slug: $Slug }) {
                            id
                            Title
                            Description
                            Published
                            Body
                            author {
                                    username
                            }
                            Slug
                    }
                    }
            `;
            export async function preload({params, query}) {
                    const client = new ApolloClient({ 
                            uri: '<Your Strapi GraphQL Endpoint>',
                            fetch: this.fetch
                             });
                    const results = await client.query({
                            query: blogQuery,
                            variables: {"Slug" : params.slug} 
                    })
                    return {post: results.data.blogs}
            }
    </script>
    
    <script>
            export let post;
    </script>
    
    <style>
            .content :global(h2) {
                    font-size: 1.4em;
                    font-weight: 500;
            }
            .content :global(pre) {
                    background-color: #f9f9f9;
                    box-shadow: inset 1px 1px 5px rgba(0,0,0,0.05);
                    padding: 0.5em;
                    border-radius: 2px;
                    overflow-x: auto;
            }
            .content :global(pre) :global(code) {
                    background-color: transparent;
                    padding: 0;
            }
            .content :global(ul) {
                    line-height: 1.5;
            }
            .content :global(li) {
                    margin: 0 0 0.5em 0;
            }
    </style>
    
    <svelte:head>
            <title>an amazing article</title>
    </svelte:head>
    
    {#each post as post}
                    <h2>{post.Title}</h2>
                    <h3>{moment().to(post.Published)} by {post.author.username}</h3>
    
                    <div class='content'>
                    {@html post.Body} </div>
    
    {/each}
    
    <p>⇺<a href="articles"> back to articles</a></p>
    

    Note: In Svelte, dynamic parameters are encoded using [brackets]. Our [slug].svelte file lets us add routes for different posts dynamically.

    Just like in routes/articles/index.svelte, here we’ve imported a few packages, and then used Apollo Client to make a query: blogQuery. This query is different because we’re filtering our data to make sure it returns a specific blog post. The params argument in our preload() function lets us access params.slug, which is the slug of the current page (that is, the slug of this particular blog post). We used params.slug as a variable in our GraphQL query so that only the data with a slug matching the slug of our web page is returned. We then stored our query response in a variable (results), and our preload() function returns post, a variable with the parsed query result.
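    Conceptually, the where filter in the query behaves like filtering an array of posts by their Slug field. Here is a dependency-free sketch of that idea (the sample data is hypothetical, shaped like the blogs our query returns):

    ```javascript
    // Sample posts, shaped like the blog entries returned by our GraphQL query (values hypothetical).
    const posts = [
      { id: 1, Title: "first post", Slug: "first-post" },
      { id: 2, Title: "second post", Slug: "second-post" },
    ];

    // Mimics `blogs(where: { Slug: $Slug })`: keep only posts whose Slug matches exactly.
    function filterBySlug(allPosts, slug) {
      return allPosts.filter((post) => post.Slug === slug);
    }

    console.log(filterBySlug(posts, "second-post"));
    // [ { id: 2, Title: 'second post', Slug: 'second-post' } ]
    ```

    This is why the slug we enter in Strapi’s admin UI has to be unique: it’s the key that singles out one post.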

    Finally, we displayed our post’s title, publication date, and body (wrapped in Svelte’s {@html} tag).

    That’s it. We can now dynamically display pages for any posts added to Strapi’s back end.

    We can now work on the “about” and “home” pages. In ./frontend/src/routes, paste this code in the about.svelte file:

    <svelte:head>
            <title>about</title>
    </svelte:head>
    
    <h1>about this site</h1>
    
    <p>
    minimalist web design really lets the content stand out and shine. 
    this is why a simple website design is the first choice of so many artists, photographers, 
    and even some writers. they want their creative content to be the center of attention, 
    rather than design elements created by someone else. 
    </p>
    
    <p>this minimal blog is built with <a href="https://svelte.dev/">svelte</a> and <a href="https://strapi.io/">strapi</a> 
    images by <a href="https://unsplash.com/@glencarrie">glen carrie</a> from unsplash 
    </p>
    

    For our home page, let’s go to ./frontend/src/routes and paste the following code in index.svelte:

    <style>
            h1, figure, p {
                    text-align: center;
                    margin: 0 auto;
            }
            h1 {
                    font-size: 2.8em;
                    font-weight: 400;
                    margin: 0 0 0.5em 0;
            }
            figure {
                    margin: 0 0 1em 0;
            }
            img {
                    width: 100%;
                    max-width: 400px;
                    margin: 0 0 1em 0;
            }
            p {
                    margin: 1em auto;
                    padding-bottom: 1em;
            }
            @media (min-width: 480px) {
                    h1 {
                            font-size: 4em;
                    }
            }
    </style>
    
    <svelte:head>
            <title>a minimal sapper blog</title>
    </svelte:head>
    <p>welcome to</p>
    <h1>the<b>blog.</b></h1>
    
    <figure>
            <img alt='the birds on a line' src="bird-bg.png">
            <figcaption>where less is more</figcaption>
    </figure>
    
    <p>
    <strong>
    we're minimal and that might seem boring, except you're actually paying attention.
    </strong>
    </p>
    <p class="link"><a href="about">find out why</a>...</p>
    

    We’ve created all the pages needed in order for our app to run as expected. If you run the app now, you should see something like this:

    A screenshot of the minimal blog home page
    Your finished minimal blog home page. (Large preview)

    Pretty sweet, yeah?

    Locally, everything works great, but we want to deploy our static blog to the web and share our beautiful creation. Let’s do that.

    Deploy To Netlify

    We’re going to deploy our application to Netlify, but before we can do that, log into your Netlify account (or create an account, if you don’t already have one). Sapper gives us the option to deploy a static version of our website, and we’ll do just that.

    • Navigate to ./frontend.
    • Run npm run export to export a static version of the application.

    Your application will be exported to ./frontend/__sapper__/export.

    Drag your exported folder into Netlify, and your website will be live in an instant.

    The Netlify Dashboard
    Drag your export folder to the Netlify Dashboard. (Large preview)

    Optionally, we can deploy our website from Git by following Netlify’s documentation. Be sure to add npm run export as the build command and __sapper__/export as the base directory.
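    If you deploy from Git, those settings can also live in a netlify.toml file at the root of the front-end project. A minimal sketch (adjust the paths if your repository layout differs):

    ```toml
    [build]
      command = "npm run export"
      publish = "__sapper__/export"
    ```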

    We also have the option to deploy with Vercel (formerly ZEIT), as mentioned in Sapper’s documentation.

    Conclusion

    That was fun, right? We just built a static blog with Sapper and Strapi and deployed it to Netlify, all in less than 15 minutes. Besides the stellar developer experience, Strapi and Sapper are such a delight to work with. They bring a fresh perspective to building for the web, and this tutorial is a testament to that. We definitely aren’t limited to static websites, and I can’t wait to see what you all build after this. Share your projects with me on Twitter. I can’t wait to see them. Take care, till next time!

    Smashing Editorial
    (ks, ra, al, yk, il)

    Source link

    web design

    Creating A Static Blog With Sapper And Strapi — Smashing Magazine

    08/05/2020

    About The Author

    Daniel Madalitso Phiri is a Developer, Writer, Builder of Wacky things, DJ, Lorde superfan and Community Builder from Lusaka, Zambia.
    More about
    Daniel

    This article will take you through how to build a Svelte-powered static blog with Sapper and Strapi, as well as how to deploy the website to Netlify. You’ll understand how to build a static website, as well as use the power of a headless CMS, with a real-world example.

    In this tutorial, we will build a statically generated minimal blog with Sapper, a Svelte-based progressive JavaScript framework, for our front end, and then use Strapi, an open-source headless content management system (CMS), for the back end of our application. This tutorial is aimed at intermediate front-end developers, specifically those who want the versatility of a headless CMS, like Strapi, as well as the minimal structure of a JavaScript framework, like Sapper. Feel free to try out the demo or check out the source code on GitHub.

    To go through the article smoothy, you will need the LTS version of Node.js and either Yarn or npm installed on your device beforehand. It’s also worth mentioning that you will need to have a basic understanding of JavaScript and GraphQL queries.

    Before getting started, let’s get some definitions out of the way. A static-site generator is a tool that generates static websites, and a static website can be defined as a website that is sourced from purely static HTML files. For an overview of your options for static-site generators today, check out “Top 10 Static Site Generators in 2020”.

    A headless CMS, on the other hand, is a CMS accessible via an API. Unlike the traditional CMS’ of the past, a headless CMS is front-end agnostic and doesn’t tie you to a single programming language or platform. Strapi’s article “Why Frontend Developers Should Use a Headless CMS” is good resource to understand the usefulness of a headless CMS.

    Static-site generators, like headless CMS’, are quickly gaining mainstream appeal in the front-end web development community. Both pieces of technology bring with them a much lower barrier to entry, flexibility, and a generally better developer experience. We’ll see all this and more as we build our blog.

    You might be wondering, “Why should I use this instead of the alternatives?” Sapper is based on Svelte, which is known for its speed and relatively small bundle size. In a world where performance plays a huge role in determining an effective user experience, we want to optimize for that. Developers today are spoiled for choice when it comes to front-end frameworks — if we want to optimize for speed, performance, and developer experience (like I do in this project), then Sapper is a solid choice!

    So, let’s get started building our minimal blog, starting with our Sapper front end.

    Sapper Front End

    Our front end is built with Sapper, a framework for building extremely high-performance web apps using Svelte. Sapper, which is short for “Svelte app maker”, enables developers to export pages as a static website, which we will be doing today. Svelte has a very opinionated way of scaffolding projects, using Degit.

    “Degit makes copies of Git repositories and fetches the latest commit in the repository. This is a more efficient approach than using git clone, because we’re not downloading the entire Git history.”

    First, install Degit by running npm install -g degit in your command-line interface (CLI).

    Next up, run the following commands in the CLI to set up our project.

    npx degit "sveltejs/sapper-template#rollup" frontend
    # or: npx degit "sveltejs/sapper-template#webpack" frontend
    cd frontend
    npm install
    npm run dev
    

    Note: We have the option of using either Rollup or Webpack to bundle our project. For this tutorial, we will be using Rollup.

    These commands scaffold a new project in the frontend directory, install its dependencies, and start a server on localhost.

    If you’re new to Sapper, the directory structure will need some explaining.

    Sapper’s App Structure

    If you look in the project directory, you’ll see this:

    ├ package.json
    ├ src
    │ ├ routes
    │ │ ├ # your routes here
    │ │ ├ _error.svelte
    │ │ └ index.svelte
    │ ├ client.js
    │ ├ server.js
    │ ├ service-worker.js
    │ └ template.html
    ├ static
    │ ├ # your files here
    └ rollup.config.js / webpack.config.js
    

    Note: When you first run Sapper, it will create an additional __sapper__ directory containing generated files. You’ll also notice a few extra files and a cypress directory — we don’t need to worry about those for this article.

    You will see a few files and folders. Besides those already mentioned above, these are some you can expect:

    • package.json
      This file contains your app’s dependencies and defines a number of scripts.
    • src
      This contains the three entry points for your app: src/client.js, src/server.js, and (optionally) src/service-worker.js, along with a src/template.html file.
    • src/routes
      This is the meat of the app (that is, the pages and server routes).
    • static
      This is a place to put any files that your app uses: fonts, images, and so on. For example, static/favicon.png will be served as /favicon.png.
    • rollup.config.js
      We’re using Rollup to bundle our app. You probably won’t need to change its configuration, but if you want to, this is where you would do it.

    The directory structure is pretty minimal for the functionality that the project provides. Now that we have an idea of what our project directory looks like and what each file and folder does, we can run our application with npm run dev.

    You should see the Svelte-eque starter home page of our blog.

    A screenshot of the Sapper Starter webpage.
    Your Sapper home page. (Large preview)

    This looks really good! Now that our front end is set up and working, we can move on to the back end of the application, where we will set up Strapi.

    Strapi Back End

    Strapi is both headless and self-hosted, which means we have control over our content and where it’s hosted — no server, language, or vendor lock-in to worry about, and we can keep our content private. Strapi is built with JavaScript and has a content editor built with React. We’ll use this content editor to create some content models and store actual content that we can query later on. But before we can do all of this, we have to set it up by following the instructions below.

    1. Install Strapi and Create New Project

    • Open your CLI.
    • Run yarn create strapi-app backend --quickstart. This will create a new folder named backend and build the React admin UI.

    2. Create Administrator

    A screenshot of the Strapi register screen.
    Create an admin account. (Large preview)

    3. Create Blog Collection Type

    • Navigate to “Content-Types Builder”, under “Plugins” in the left-hand menu.
    • Click the “+ Create new collection type” link.
    • Name it “blog”.
    • Click “Continue”.
    A screenshot of the Strapi dashboard - creating a new collection type
    Create a new collection type. (Large preview)
    • Add a “Text field” (short text), and name it “Title”.
    • Click the “+ Add another field” button.
    A screenshot of the Strapi dashboard - creating a new text field
    Create a new Text field. (Large preview)
    • Add a “Text field” (long text), and name it “Description”.
    • Click the “+ Add another field” button.
    A screenshot of the Strapi dashboard - creating a new text field
    Create a new Text field. (Large preview)
    • Add a “Date field” of the type “date”, and name it “Published”.
    • Click the “+ Add another field” button.
    A screenshot of the Strapi dashboard - creating a new date field
    Create a new Date field. (Large preview)
    • Add a “Rich Text field”, and name it “Body”.
    • Click the “+ Add another field” button.
    A screenshot of the Strapi dashboard - creating a new rich text field
    Create a new Rich Text field. (Large preview)
    • Add another “Text field” (short text), and name it “Slug”.
    • Click the “+ Add another field” button.
    A screenshot of the Strapi dashboard - adding a new text field
    Create a new Text field. (Large preview)
    • Add a “Relation field”.
    • On the right side of the relation, click on the arrow and select “User”.
    • On the left side of the relation, change the field name to “author”.
    A screenshot of the Strapi dashboard - creating a new relation
    Create a new Relation field. (Large preview)
    • Click the “Finish” button.
    • Click the “Save” button, and wait for Strapi to restart.

    When it’s finished, your collection type should look like this:

    A screenshot of the Blog collection type showing all its fields
    Overview of your Blog collection type. (Large preview)

    4. Add a New User to “Users” Collection Type

    • Navigate to “Users” under “Collection Types” in the left-hand menu.
    • Click “Add new user”.
    • Enter your desired “Email”, “Username”, and “Password”, and toggle the “Confirmed” button.
    • Click “Save”.
    A screenshot of the User collection type with the 'add new user' button highlighted
    Add some user content. (Large preview)

    Now we have a new user who we can attribute articles to when adding articles to our “Blog” collection type.

    5. Add Content to “Blogs” Collection Type

    • Navigate to “Blogs” under “Collection Types” in the left-hand menu.
    • Click “Add new blog”.
    • Fill in the information in the fields specified (you have the option to select the user whom you just created as an author).
    • Click “Save”.
    A screenshot of the Blog collection type with the 'add new blog' button highlighted
    Add some blog content. (Large preview)

    6. Set Roles and Permissions

    • Navigate to “Roles and Permissions” under “Plugins” in the left-hand menu.
    • Click the “Public” role.
    • Scroll down under “Permissions”, and find “Blogs”.
    • Tick the boxes next to “find” and “findone”.
    • Click “Save”.
    A screenshot of the Strapi Permissions page with the find and findone actions highlighted
    Set permissions for your Public role. (Large preview)

    7. Send Requests to the Collection Types API

    Navigate to http://localhost:1337/blogs to query your data.

    You should get back some JSON data containing the content that we just added. For this tutorial, however, we will be using Strapi’s GraphQL API.

    To enable it:

    • Open your CLI.
    • Run cd backend to navigate to ./backend.
    • Run yarn strapi install graphql to install the GraphQL plugin.

    Alternatively, you can do this:

    • In the admin UI, navigate to “Marketplace” under “General” in the left-hand menu.
    • Click “Download” on the GraphQL card.
    • Wait for Strapi to restart.
    A screenshot of the Strapi Marketplace with the download button on the GraphQL plugin highlighted
    Download the GraphQL plugin. (Large preview)

    When the GraphQL plugin is installed and Strapi is back up and running, we can test queries in the GraphQL playground.
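For example, a query over the fields we defined for the “Blog” collection type might look like this in the playground (field names follow the collection set up above):

```graphql
query Blogs {
  blogs {
    id
    Title
    Description
    Published
    Slug
    author {
      username
    }
  }
}
```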

    That is all for our back-end setup. All that’s left for us to do is consume the GraphQL API and render all of this beautiful content.

    Piecing Together Both Ends

    We’ve just queried our Strapi back end and gotten back some data. All we have to do now is set up our front end to render the content that we get from Strapi via the GraphQL API. Because we are using Strapi’s GraphQL API, we will have to install the Svelte Apollo client and a few other packages to make sure everything works properly.

    Installing Packages

    • Open the CLI, and navigate to ./frontend.
    • Run npm i --save apollo-boost graphql svelte-apollo moment.

    Moment.js helps us to parse, validate, manipulate, and display dates and times in JavaScript.
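In this tutorial, Moment.js only formats relative publication dates. Purely as an illustration (this is not how Moment works internally, and not part of its API), the built-in Intl.RelativeTimeFormat produces similar output:

```javascript
// Illustration only: a relative-date helper similar in spirit to what
// Moment.js is used for below. The helper name is hypothetical.
const rtf = new Intl.RelativeTimeFormat("en", { numeric: "always" });

// Describe how long ago something happened, given a difference in days:
function publishedAgo(days) {
  return rtf.format(-days, "day");
}

console.log(publishedAgo(3)); // "3 days ago"
```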

    The packages are now installed, which means we are able to make GraphQL queries in our Svelte app. The blog we’re building will have three pages: “home”, “about” and “articles”. All of our blog posts from Strapi will be displayed on the “articles” page, giving users access to each article. If we think about how that would look, our “articles” page’s route will be /articles, and then each article’s route will be /articles/:slug, where slug is what we enter in the “Slug” field when adding the content in the admin UI.

    This is important to understand because we will tailor our Svelte app to work in the same way.

    In ./frontend/src/routes, you will notice a folder named “blog”. We don’t need this folder in this tutorial, so you can delete it. Doing so will break the app, but don’t worry: It’ll be back up and running once we make our “articles” page, which we’ll do now.

    • Navigate to ./frontend/src/routes.
    • Create a folder named “articles”.
    • In ./frontend/src/routes/articles, create a file named index.svelte, and paste the following code in it.
    • When pasting, be sure to replace <Your Strapi GraphQL Endpoint> with your actual Strapi GraphQL endpoint. For your local version, this will usually be http://localhost:1337/graphql.
    <script context="module">
            import ApolloClient, { gql } from 'apollo-boost';  
            import moment from 'moment';
    
            const blogQuery = gql`
            query Blogs {  
                    blogs {
                            id
                            Title
                            Description
                            Published
                            Body
                            author {
                                    username
                            }
                            Slug
                    }
            }
            `;
            export async function preload({params, query}) {
                    const client = new ApolloClient({ 
                            uri: '<Your Strapi GraphQL Endpoint>',
                            fetch: this.fetch
                             });
                    const results = await client.query({
                            query: blogQuery
                    })
                    return {posts: results.data.blogs}
            }
    </script>
    
    <script>
            export let posts;
    </script>
    
    <style>
            ul, p {
                    margin: 0 0 1em 0;
                    line-height: 1.5;
            }
            .main-title {
                    font-size: 25px;
            }
    </style>
    
    <svelte:head>
            <title>articles</title>
    </svelte:head>
    
    <h1>recent posts</h1>
    
    <ul>
    {#each posts as post}
        <li>
              <a class="main-title" rel='prefetch' href='articles/{post.Slug}'>
                {post.Title}
              </a>
        </li>
        <p> 
  {moment().to(post.Published, true)} ago by {post.author.username} 
        </p>
    {/each}
    </ul>
    

    This file represents our /articles route. In the code above, we’ve imported a few packages and then used Apollo Client to make a query: blogQuery. We then stored our query response in a variable, results, and used the preload() function to process the data needed on our page. The function then returns posts, a variable with the parsed query result.

    We’ve used Svelte’s #each block to loop through the data from Strapi, displaying the title, date of publication, and author. Our <a> tag, when clicked, goes to a page defined by the slug that we entered for our post in Strapi’s admin UI. This means that when the link is clicked, we open up a page for a particular article, and the slug is used to identify that article.
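The link construction in that #each block boils down to simple string building. A sketch in plain JavaScript, with hypothetical post data shaped like the query result:

```javascript
// Hypothetical posts shaped like the Strapi query result used on this page.
const posts = [
  { Title: "First Post", Slug: "first-post" },
  { Title: "Second Post", Slug: "second-post" },
];

// Each list item links to articles/<Slug>, mirroring the #each block.
function articleLinks(posts) {
  return posts.map((post) => ({
    href: `articles/${post.Slug}`,
    text: post.Title,
  }));
}

console.log(articleLinks(posts)[0].href); // "articles/first-post"
```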

    For our /articles/:slug route, create a file named [slug].svelte in ./frontend/src/routes/articles, and paste the following code:

    <script context="module">
            import ApolloClient, { gql } from 'apollo-boost';  
            import moment from 'moment';
    
            const blogQuery = gql`
            query Blogs($Slug: String!) {
                    blogs: blogs(where: { Slug: $Slug }) {
                            id
                            Title
                            Description
                            Published
                            Body
                            author {
                                    username
                            }
                            Slug
                    }
                    }
            `;
            export async function preload({params, query}) {
                    const client = new ApolloClient({ 
                            uri: '<Your Strapi GraphQL Endpoint>',
                            fetch: this.fetch
                             });
                    const results = await client.query({
                            query: blogQuery,
                            variables: {"Slug" : params.slug} 
                    })
                    return {post: results.data.blogs}
            }
    </script>
    
    <script>
            export let post;
    </script>
    
    <style>
            .content :global(h2) {
                    font-size: 1.4em;
                    font-weight: 500;
            }
            .content :global(pre) {
                    background-color: #f9f9f9;
                    box-shadow: inset 1px 1px 5px rgba(0,0,0,0.05);
                    padding: 0.5em;
                    border-radius: 2px;
                    overflow-x: auto;
            }
            .content :global(pre) :global(code) {
                    background-color: transparent;
                    padding: 0;
            }
            .content :global(ul) {
                    line-height: 1.5;
            }
            .content :global(li) {
                    margin: 0 0 0.5em 0;
            }
    </style>
    
    <svelte:head>
            <title>an amazing article</title>
    </svelte:head>
    
    {#each post as post}
                    <h2>{post.Title}</h2>
                    <h3>{moment().to(post.Published)} by {post.author.username}</h3>
    
                    <div class='content'>
                    {@html post.Body} </div>
    
    {/each}
    
    <p>⇺<a href="articles"> back to articles</a></p>
    

    Note: In Svelte, dynamic parameters are encoded using [brackets]. Our [slug].svelte file lets us add routes for different posts dynamically.

    Just like in routes/articles/index.svelte, here we’ve imported a few packages and used Apollo Client to make a query: blogQuery. This query is different because we’re filtering our data to make sure it returns a specific blog post. The params argument in our preload() function lets us access params.slug, which is the slug of the current page (that is, the slug of this particular blog post). We used params.slug as a variable in our GraphQL query so that only the data with a slug matching the slug of our web page is returned. We then stored our query response in a variable (results), and our preload() function returns post, a variable with the parsed query result.
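Conceptually, the where clause narrows the result set the way a client-side filter would. A sketch with hypothetical data (Strapi does this on the server, not like this):

```javascript
// Hypothetical list of blogs, as the unfiltered query would return them.
const blogs = [
  { Title: "First Post", Slug: "first-post" },
  { Title: "Second Post", Slug: "second-post" },
];

// `where: { Slug: $Slug }` behaves much like this filter (a sketch only):
function findBySlug(blogs, slug) {
  return blogs.filter((blog) => blog.Slug === slug);
}

console.log(findBySlug(blogs, "second-post")[0].Title); // "Second Post"
```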

    Finally, we displayed our post’s title, publication date, and body (wrapped in Svelte’s {@html} tag).

    That’s it. We can now dynamically display pages for any posts added to Strapi’s back end.

    We can now work on the “about” and “home” pages. In ./frontend/src/routes, paste this code in the about.svelte file:

    <svelte:head>
            <title>about</title>
    </svelte:head>
    
    <h1>about this site</h1>
    
    <p>
    minimalist web design really lets the content stand out and shine. 
    this is why a simple website design is the first choice of so many artists, photographers, 
    and even some writers. they want their creative content to be the center of attention, 
    rather than design elements created by someone else. 
    </p>
    
    <p>this minimal blog is built with <a href="https://svelte.dev/">svelte</a> and <a href="https://strapi.io/">strapi</a>. 
    images by <a href="https://unsplash.com/@glencarrie">glen carrie</a> from unsplash. 
    </p>
    

    For our home page, let’s go to ./frontend/src/routes and paste the following code in index.svelte:

    <style>
            h1, figure, p {
                    text-align: center;
                    margin: 0 auto;
            }
            h1 {
                    font-size: 2.8em;
                    font-weight: 400;
                    margin: 0 0 0.5em 0;
            }
            figure {
                    margin: 0 0 1em 0;
            }
            img {
                    width: 100%;
                    max-width: 400px;
                    margin: 0 0 1em 0;
            }
            p {
                    margin: 1em auto;
                    padding-bottom: 1em;
            }
            @media (min-width: 480px) {
                    h1 {
                            font-size: 4em;
                    }
            }
    </style>
    
    <svelte:head>
            <title>a minimal sapper blog</title>
    </svelte:head>
    <p>welcome to</p>
    <h1>the<b>blog.</b></h1>
    
    <figure>
            <img alt='the birds on a line' src="bird-bg.png">
            <figcaption>where less is more</figcaption>
    </figure>
    
    <p>
    <strong>
    we're minimal and that might seem boring, except you're actually paying attention.
    </strong>
    </p>
    <p class="link"><a href="about">find out why</a>...</p>
    

    We’ve created all the pages needed in order for our app to run as expected. If you run the app now, you should see something like this:

    A screenshot of the minimal blog home page
    Your finished minimal blog home page. (Large preview)

    Pretty sweet, yeah?

    Locally, everything works great, but we want to deploy our static blog to the web and share our beautiful creation. Let’s do that.

    Deploy To Netlify

    We’re going to deploy our application to Netlify, but before we can do that, log into your Netlify account (or create an account, if you don’t already have one). Sapper gives us the option to deploy a static version of our website, and we’ll do just that.

    • Navigate to ./frontend.
    • Run npm run export to export a static version of the application.

    Your application will be exported to ./frontend/__sapper__/export.

    Drag your exported folder into Netlify, and your website will be live in an instant.

    The Netlify Dashboard
    Drag your export folder to the Netlify Dashboard. (Large preview)

    Optionally, we can deploy our website from Git by following Netlify’s documentation. Be sure to add npm run export as the build command and __sapper__/export as the publish directory.

    We also have the option to deploy with Vercel (formerly ZEIT), as mentioned in Sapper’s documentation.

    Conclusion

    That was fun, right? We just built a static blog with Sapper and Strapi and deployed it to Netlify in less than 15 minutes. Beyond the stellar developer experience, Strapi and Sapper are a delight to work with. They bring a fresh perspective to building for the web, and this tutorial is a testament to that. We definitely aren’t limited to static websites, and I can’t wait to see what you all build after this. Share your projects with me on Twitter. Take care, till next time!


    Differences Between Static Generated Sites And Server-Side Rendered Apps — Smashing Magazine

    07/02/2020

    About The Author

    Front-end developer based in Lagos, Nigeria. He enjoys converting designs into code and building things for the web.
    More about
    Timi

    Statically generated (pre-rendered) sites and server-side rendered applications are two modern ways to build front-end applications using JavaScript frameworks. Though different, these two modes are often mixed up as the same thing, and in this tutorial we’re going to learn about the differences between them.

    There are currently three types of applications you can build with JavaScript: Single-Page Applications (SPAs), pre-rendered/statically generated sites, and server-side rendered applications. SPAs come with many challenges, one of which is Search Engine Optimization (SEO). Possible solutions are to make use of Static Site Generators or Server-Side Rendering (SSR).

    In this article, I’m going to explain both alongside their pros and cons so that you have a balanced view. We’re going to look at what static generation/pre-rendering is, as well as frameworks such as Gatsby and VuePress that help in creating statically generated sites. We’re also going to look at what server-side rendered (SSR) applications are, as well as frameworks like Next.js and Nuxt.js that can help you create SSR applications. Finally, we’re going to cover the differences between these two methods and which of them you should use when building your next application.

    Note: You can find all the code snippets in this article on GitHub.

    What Is A Static Site Generator?

    A Static Site Generator (SSG) is a software application that creates HTML pages from templates or components and a given content source. You give it some text files and content, and the generator gives you back a complete website; this completed website is referred to as a static generated site. This means that your site’s pages are generated at build time, and your content does not change unless you add new content or components and rebuild the site.

    Diagram explaining how static site generation works
    How static site generation works (Large preview)
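The build-time idea can be shown with a toy generator; every name here is illustrative and implies no real tool’s API:

```javascript
// A toy static site generator: a template plus content in, finished HTML out.
const template = (title, body) =>
  `<html><head><title>${title}</title></head><body>${body}</body></html>`;

const content = [
  { path: "index.html", title: "Home", body: "<h1>Welcome</h1>" },
  { path: "about.html", title: "About", body: "<h1>About</h1>" },
];

// "Build time": every page is rendered once, up front, not per request.
function build(pages) {
  return pages.map((page) => ({
    path: page.path,
    html: template(page.title, page.body),
  }));
}

console.log(build(content)[0].path); // "index.html"
```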

    This approach is good for building applications whose content does not change too often — sites whose content does not have to change depending on the user, and that do not have a lot of user-generated content. An example of such a site is a blog or a personal website. Let’s look at some advantages of using static generated sites.

    PROS

    • Fast website: Since all of your site’s pages and content are generated at build time, you do not have to worry about API calls to the server for content, which makes your site very fast.
    • Easy to deploy: After your static site has been generated, you are left with static files, which can be easily deployed to platforms like Netlify.
    • Security: Static generated sites are composed solely of static files, so the risk of cyber attacks is minimal. Because there is no database, attackers cannot inject malicious code or exploit one.
    • You can use version-control software (e.g. Git) to manage and track changes to your content. This can come in handy when you want to roll back changes you made to the content on your site.

    CONS

    • Content can become stale if it changes too quickly.
    • To update its content, you have to rebuild the site.
    • Build time would increase depending on the size of the application.

    Examples of static site generators are GatsbyJS and VuePress. Let us take a look at how to create static sites using these two generators.

    Gatsby

    According to their official website,

    “Gatsby is a free and open-source framework based on React that helps developers build blazing-fast websites and apps.”

    This means developers familiar with React would find it easy to get started with Gatsby.

    To use this generator, you first have to install it using NPM:

    npm install -g gatsby-cli
    

    This will install Gatsby globally on your machine; you only have to run this command once. After the installation is complete, you can create your first static site using the following command.

    gatsby new demo-gatsby
    

    This command will create a new Gatsby project that I have named demo-gatsby. When this is done, you can start up your app server by running the following command:

    cd demo-gatsby
    gatsby develop
    

    Your Gatsby application should be running on localhost:8000.

    Gatsby default landing page
    Gatsby default starter page (Large preview)

    The folder structure for this app looks like this;

    --| gatsby-browser.js  
    --| LICENSE        
    --| README.md
    --| gatsby-config.js
    --| node_modules/  
    --| src/
    ----| components
    ----| pages
    ----| images
    --| gatsby-node.js     
    --| package.json   
    --| yarn.lock
    --| gatsby-ssr.js      
    --| public/
    ----| icons
    ----| page-data
    ----| static
    

    For this tutorial, we’re only going to look at the src/pages folder. This folder contains files that would be generated into routes on your site.

    To test this, let us add a new file (newPage.js) to this folder:

    import React from "react"
    import { Link } from "gatsby"
    import Layout from "../components/layout"
    import SEO from "../components/seo"
    const NewPage = () => (
      <Layout>
        <SEO title="My New Page" />
        <h1>Hello Gatsby</h1>
        <p>This is my first Gatsby Page</p>
        <button>
          <Link to='/'>Home</Link>
        </button>
      </Layout>
    )
    export default NewPage
    

    Here, we import React from the react package so that when your code is transpiled to plain JavaScript, references to React resolve correctly. We also import the Link component from gatsby; this is Gatsby’s routing component, used in place of the native anchor tag (<a href='#'>Link</a>). It accepts a to prop that takes a route as its value.

    We import a Layout component that was added to your app by default. This component handles the layout of pages nested inside it. We also import the SEO component into this new file. This component accepts a title prop and configures this value as part of your page’s metadata. Finally, we export the function NewPage, which returns JSX containing your new page’s content.

    And in your index.js file, add a link to this new page we just created:

    import React from "react"
    import { Link } from "gatsby"
    import Layout from "../components/layout"
    import Image from "../components/image"
    import SEO from "../components/seo"
    const IndexPage = () => (
      <Layout>
        <SEO title="Home" />
        <h1>Hi people</h1>
        <p>Welcome to your new Gatsby site.</p>
        <p>Now go build something great.</p>
        <div style={{ maxWidth: `300px`, marginBottom: `1.45rem` }}>
          <Image />
        </div>
        <Link to="/page-2/">Go to page 2</Link>
        {/* new link */}
        <button>
          <Link to="/newPage/">Go to New Page</Link>
        </button>
      </Layout>
    )
    export default IndexPage
    

    Here, we import the same components that were used in the newPage.js file, and they perform the same function here. We also import an Image component from our components folder. This component is added by default to your Gatsby application, and it helps with lazy loading images and serving reduced file sizes. Finally, we export a function IndexPage that returns JSX containing our new link and some default content.

    Now, if we open our browser, we should see our new link at the bottom of the page.

    Gatsby default landing page with link to a new page
    Gatsby landing page with new link (Large preview)

    And if you click on Go To New Page, it should take you to your newly added page.

    New page containing some texts
    New gatsby page (Large preview)

    VuePress

    VuePress is a static site generator powered by Vue, Vue Router, and webpack. It requires little to no configuration to get started. While there are a number of static site generators, VuePress stands out for a single reason: its primary directive is to make it easier for developers to create and maintain great documentation for their projects.

    To use VuePress, you first have to install it:

    //globally
    yarn global add vuepress # OR npm install -g vuepress
    
    //in an existing project
    yarn add -D vuepress # OR npm install -D vuepress
    

    Once the installation process is done, you can run the following command in your terminal:

    # create the project folder
    mkdir demo-vuepress && cd demo-vuepress
    
    # create a markdown file
    echo '# Hello VuePress' > README.md
    
    # start writing
    vuepress dev
    

    Here, we create a folder for our VuePress application, add a README.md file with # Hello VuePress as the only content inside this file, and finally, start up our server.

    When this is done, our application should be running on localhost:8080 and we should see this in our browser:

    A VuePress webpage with a text saying ‘Hello VuePress’
    VuePress landing page (Large preview)

    VuePress supports VueJS syntax and markup inside this file. Update your README.md file with the following:

    # Hello VuePress
    _VuePress Rocks_
    > **Yes!**
    _It supports JavaScript interpolation code_
    > **{{new Date()}}**
    <p v-for="i of ['v','u', 'e', 'p', 'r', 'e', 's', 's']">{{i}}</p>
    

    If you go back to your browser, your page should look like this:

    Updated VuePress page
    Updated Vuepress page (Large preview)

    To add a new page to your VuePress site, add a new markdown file to the root directory and name it whatever you want the route to be. In this case, I’ve named it Page-2.md and added the following to the file:

    # hello World
    Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod
    tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam,
    quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo
    consequat.
    

    And now, if you navigate to /page-2 in your browser, you should see this:

    A VuePress webpage containing hello world
    A “Hello World” page in VuePress (Large preview)

    What Is Server-Side Rendering? (SSR)

    Server-Side Rendering (SSR) is the process of rendering web pages on the server and passing them to the browser (client side), instead of rendering them in the browser. The server sends a fully rendered page to the client; the client’s JavaScript bundle then takes over and allows the SPA framework to operate.

    This means that if you have a server-side rendered application, your content is fetched on the server and passed to the browser to display to your user. Client-side rendering is different: the user would have to navigate to the page before the browser fetches data from the server, meaning they wait some seconds before being served the content of that page. Applications that have SSR enabled are called server-side rendered applications.

    A diagram explaining how server-side rendering works
    How SSR works (Large preview)
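The per-request flow can be sketched in a few lines; the names are illustrative and imply no real framework:

```javascript
// A toy sketch of SSR: on each request the server fetches fresh data and
// responds with fully rendered HTML.
function renderPage(data) {
  return `<html><body><h1>${data.title}</h1><p>${data.body}</p></body></html>`;
}

function handleRequest(fetchData) {
  const data = fetchData(); // e.g. a database or API call, made per request
  return renderPage(data);
}

// A stub standing in for a real data source:
const html = handleRequest(() => ({ title: "Latest News", body: "Fresh content" }));
console.log(html.includes("<h1>Latest News</h1>")); // true
```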

    This approach is good for building complex applications that require user interaction, rely on a database, or where the content changes very often. This is because content on these sites changes very often, and users need to see updates as soon as they’re made. It is also good for applications that tailor content depending on who is viewing it, for applications that need to store user-specific data like email and user preferences, and for catering to SEO. An example of this is a large e-commerce platform or a social media site. Let us look at some of the advantages of server-side rendering your applications.

    Pros

    • Content is up to date because it is fetched on the fly;
    • Your site loads fast because the content is fetched on the server before being rendered to the user;
    • Because rendering happens on the server, your users’ devices matter little to the load time of your page, which leads to better performance.

    CONS

    • More API calls to the server since they’re made per request;
    • Cannot deploy to a static CDN.

    Examples of frameworks that offer SSR are Next.js and Nuxt.js.

    Next.js

    Next.js is a React.js framework that helps in building static sites, server-side rendered applications, and so on. Since it was built on React, knowledge of React is required to use this framework.

    To create a Next.js app, you need to run the following:

    npm init next-app
    # or
    yarn create next-app
    

    You will be prompted to choose a name for your application; I have named mine demo-next. The next option is to select a template, and I selected the Default starter app, after which setup begins. When this is done, we can now start our application:

    cd demo-next
    yarn dev 
    # or npm run dev
    

    Your application should be running on localhost:3000, and you should see this in your browser:

    Default Nextjs landing page
    Next.js landing page (Large preview)

    The page being rendered can be found in pages/index.js, so if you open this file and modify the JSX inside the Home function, the change will be reflected in your browser. Replace the JSX with this:

    import Head from 'next/head'
    export default function Home() {
      return (
        <div className="container">
          <Head>
            <title>Hello Next.js</title>
        <link rel="icon" href="/favicon.ico" />
          </Head>
          <main>
            <h1 className="title">
              Welcome to <a href="https://nextjs.org">Next.js!</a>
            </h1>
            <p className='description'>Nextjs Rocks!</p>
          </main>
          <style jsx>{`
            main {
              padding: 5rem 0;
              flex: 1;
              display: flex;
              flex-direction: column;
              justify-content: center;
              align-items: center;
            }
            .title a {
              color: #0070f3;
              text-decoration: none;
            }
            .title a:hover,
            .title a:focus,
            .title a:active {
              text-decoration: underline;
            }
            .title {
              margin: 0;
              line-height: 1.15;
              font-size: 4rem;
            }
            .title,
            .description {
              text-align: center;
            }
            .description {
              line-height: 1.5;
              font-size: 1.5rem;
            }
          `}</style>
          <style jsx global>{`
            html,
            body {
              padding: 0;
              margin: 0;
              font-family: -apple-system, BlinkMacSystemFont, Segoe UI, Roboto,
                Oxygen, Ubuntu, Cantarell, Fira Sans, Droid Sans, Helvetica Neue,
                sans-serif;
            }
            * {
              box-sizing: border-box;
            }
          `}</style>
        </div>
      )
    }
    

    In this file, we make use of Next.js’ Head component to set our page’s metadata title and favicon. We also export a Home function that returns JSX containing our page’s content. This JSX contains our Head component together with our main page’s content. It also contains two style tags: one for styling this page, and the other for the global styling of the app.

    Now, you should see that the content on your app has changed to this:

    Nextjs landing page containing ‘welcome to Nextjs’ text
    Updated landing page (Large preview)

    Now, if we want to add a new page to our app, we have to add a new file inside the /pages folder. Routes are automatically created based on the /pages folder structure. This means that if you have a folder structure that looks like this:

    --| pages
    ----| index.js ==> '/'
    ----| about.js ==> '/about'
    ----| projects
    ------| next.js ==> '/projects/next'
    

    So in your pages folder, add a new file and name it hello.js then add the following to it:

    import Head from 'next/head'
    export default function Hello() {
      return (
        <div>
           <Head>
            <title>Hello World</title>
            <link rel="icon" href="/favicon.ico" />
          </Head>
          <main className='container'>
            <h1 className='title'>
             Hello <a href="https://en.wikipedia.org/wiki/Hello_World_(film)">World</a>
            </h1>
            <p className='subtitle'>Lorem ipsum dolor sit amet, consectetur adipisicing elit. Voluptatem provident soluta, sit explicabo impedit nobis accusantium? Nihil beatae, accusamus modi assumenda, optio omnis aliquid nobis magnam facilis ipsam eum saepe!</p>
          </main>
          <style jsx> {`
          
          .container {
            margin: 0 auto;
            min-height: 100vh;
            max-width: 800px;
            text-align: center;
          }
          .title {
            font-family: "Quicksand", "Source Sans Pro", -apple-system, BlinkMacSystemFont,
              "Segoe UI", Roboto, "Helvetica Neue", Arial, sans-serif;
            display: block;
            font-weight: 300;
            font-size: 100px;
            color: #35495e;
            letter-spacing: 1px;
          }
          .subtitle {
            font-weight: 300;
            font-size: 22px;
            color: #526488;
            word-spacing: 5px;
            padding-bottom: 15px;
          }
          `} </style>
        </div>
      )
    }
    

    This page is identical to the landing page we already have; we’ve only changed the content and added new styling to the JSX. Now, if we visit localhost:3000/hello, we should see our new page:

    A “Hello World” page in Next.js (Large preview)

    Finally, we need to add a link to this new page on our index.js page. To do this, we make use of Next’s Link component, which we have to import first.

    // index.js
    import Link from 'next/link'
    
    // Add this to your JSX
    <Link href='/hello'>
      <a>Next</a>
    </Link>
    

    This Link component is how we link to pages created in Next.js from within our application.

    Now if we go back to our homepage and click on this link, it would take us to our /hello page.

    Nuxt.js

    According to their official documentation:

    “Nuxt is a progressive framework based on Vue.js to create modern web applications. It is based on Vue.js official libraries (vue, vue-router and vuex) and powerful development tools (webpack, Babel and PostCSS). Nuxt’s goal is to make web development powerful and performant with a great developer experience in mind.”

    Since it is based on Vue.js, Vue.js developers will find it easy to get started with, and knowledge of Vue.js is required to use this framework.

    To create a Nuxt.js app, you need to run the following command in your terminal:

    yarn create nuxt-app <project-name>
    # or npx
    npx create-nuxt-app <project-name>
    

    This will prompt you to select a name along with some other options. I named mine demo-nuxt and selected the default answers for the other options. When this is done, you can open your app folder and look at pages/index.vue. Every file in this folder is turned into a route, so our landing page is controlled by the index.vue file. Update it with the following:

    <template>
      <div class="container">
        <div>
          <logo />
          <h1 class="title">
            Hello Nuxt
          </h1>
          <h2 class="subtitle">
            Nuxt.js Rocks!
          </h2>
          <div class="links">
            <a
              href="https://nuxtjs.org/"
              target="_blank"
              class="button--green"
            >
              Documentation
            </a>
            <a
              href="https://github.com/nuxt/nuxt.js"
              target="_blank"
              class="button--grey"
            >
              GitHub
            </a>
          </div>
        </div>
      </div>
    </template>
    <script>
    import Logo from '~/components/Logo.vue'
    export default {
      components: {
        Logo
      }
    }
    </script>
    <style>
    .container {
      margin: 0 auto;
      min-height: 100vh;
      display: flex;
      justify-content: center;
      align-items: center;
      text-align: center;
    }
    .title {
      font-family: 'Quicksand', 'Source Sans Pro', -apple-system, BlinkMacSystemFont,
        'Segoe UI', Roboto, 'Helvetica Neue', Arial, sans-serif;
      display: block;
      font-weight: 300;
      font-size: 100px;
      color: #35495e;
      letter-spacing: 1px;
    }
    .subtitle {
      font-weight: 300;
      font-size: 42px;
      color: #526488;
      word-spacing: 5px;
      padding-bottom: 15px;
    }
    .links {
      padding-top: 15px;
    }
    </style>
    

    And run your application:

    cd demo-nuxt
    # start your application
    yarn dev # or npm run dev
    

    Your application should be running on localhost:3000 and you should see this:

    Default Nuxt.js landing page (Large preview)

    We can see that this page displays the content we added to index.vue. The router works the same way as the Next.js router: it renders every file inside the /pages folder as a page. So let’s add a new page (hello.vue) to our application.

    <template>
      <div>
        <h1>Hello World!</h1>
        <p>Lorem ipsum dolor sit amet, consectetur adipisicing elit. Id ipsa vitae tempora perferendis, voluptate a accusantium itaque vel ex, provident autem quod rem saepe ullam hic explicabo voluptas, libero distinctio?</p>
      </div>
    </template>
    <script>
    export default {};
    </script>
    <style>
    </style>
    

    So if you open localhost:3000/hello, you should see your new page in your browser.

    “Hello World” page in Nuxt.js (Large preview)

    Taking A Closer Look At The Differences

    Now that we have looked at both static-site generators and server-side rendering and how to get started with them by using some popular tools, let us look at the differences between them.

    | Static Site Generators | Server-Side Rendering |
    | --- | --- |
    | Can easily be deployed to a static CDN | Cannot be deployed to a static CDN |
    | Content and pages are generated at build time | Content and pages are generated per request |
    | Content can become stale quickly | Content is always up to date |
    | Makes fewer API calls, since it only makes them at build time | Makes API calls each time a new page is visited |

    Conclusion

    We can see why it is so easy to think that static-generated sites and server-side rendered applications are the same. Now that we know what the differences between them are, I would advise trying to build both static-generated sites and server-side rendered applications in order to fully understand the differences between them.

    Further Resources

    Here are some useful links that are bound to help you get started in no time:


    Wrangling Static Assets And Media Files (Part 4) — Smashing Magazine

    06/25/2020

    About The Author

    Philip Kiely writes code and words. He is the author of Writing for Software Developers (2020). Philip holds a B.A. with honors in Computer Science from …

    Front-end developers and designers create amazing static assets for web applications. Today, we’re focusing on what happens after the style hotfix or beautiful graphic you just finished is pushed to master. We’ll also investigate handling files that users upload, called media files. Together, we’ll develop an intuition for the strategies available to Django developers for serving these files to users worldwide in a secure, performant, and cost-effective manner.

    Django websites involve a lot of files. It’s not just source code for the configuration, models, views, and templates, but also static assets: CSS and JavaScript, images, icons. As if that wasn’t enough already, sometimes users come along and want to upload their own files to your website. It’s enough to make any developer incredulous. Files everywhere!

    Here’s where I wish I could say (without caveats): “Don’t worry, Django has your back!” But unfortunately, when dealing with static assets and media files, there are a lot of caveats to deal with.

    Today, we’ll address storing and serving files for both single-server and scalable deployments while considering factors like compression, caching, and availability. We’ll also discuss the costs and benefits of CDNs and dedicated file storage solutions.

    Note: This is not a tutorial on how to deploy a Django site to any specific platform. Instead, like the other articles in the Django Highlights series (see below), it’s intended as a guide for front-end developers and designers to understand other parts of the process of creating a web application. Today, we’re focusing on what happens after the style hotfix or beautiful graphic you just finished is pushed to master. Together, we’ll develop an intuition for the strategies available to Django developers for serving these files to users worldwide in a secure, performant, and cost-effective manner.

    Previous Parts In The Series:

    • Part 1: User Models And Authentication
    • Part 2: Templating Saves Lines
    • Part 3: Models, Admin, And Harnessing The Relational Database

    Definitions

    Most of these terms are pretty straightforward, but it’s worth taking a moment to establish a shared vocabulary for this discussion.

    The three types of files in a live Django application are:

    1. Source Code
      The Python and HTML files that are created with the Django framework. These files are the core of the application. Source code files are generally pretty small, measured in kilobytes.
    2. Static Files
      Also called “static assets,” these files include CSS and JavaScript, both written by the application developer and third-party libraries, as well as PDFs, software installers, images, music, videos, and icons. These files are only used client-side. Static files range from a few kilobytes of CSS to gigabytes of video.
    3. Media Files
      Any file uploaded by a user, from profile pictures to personal documents, is called a media file. These files need to be securely and reliably stored and retrieved for the user. Media files can be of any size, the user might upload a couple of kilobytes of plaintext to a few gigabytes of video. If you’re on the latter end of this scale, you probably need more specialized advice than this article is prepared to give.

    The two types of Django deployments are:

    1. Single-Server
      A single-server Django deployment is exactly what it sounds like: everything lives on a single server. This strategy is very simple and closely resembles the development environment, but cannot handle large or inconsistent amounts of traffic effectively. The single-server approach is only applicable for learning or demonstration projects, not real-world applications that require reliable uptime.
    2. Scalable
      There are lots of different ways to deploy a Django project that allows it to scale to meet user demand. These strategies often involve spinning up and down numerous servers and using tools like load balancers and managed databases. Fortunately, we can effectively lump everything more complex than a single-server deployment into this category for the purposes of this article.

    Option 1: Default Django

    Small projects benefit from simple architecture. Django’s default handling of static assets and media files is just that: simple. For each, you have a root folder that stores the files and lives right next to the source code on the server. Simple. These root folders are generated and managed mostly through the yourproject/settings.py configuration.

    Static Assets

    The most important thing to understand when working with static files in Django is the python manage.py collectstatic command. This command rifles through the static folder of each app in the Django project and copies all static assets to the root folder. Running this command is an important part of deploying a Django project. Consider the following directory structure:

    - project
      - project
        - settings.py
        - urls.py
        - ...
      - app1
        - static/
          - app1
            - style.css
            - script.js
            - img.jpg
        - templates/
        - views.py
        - ...
      - app2
        - static/
          - app2
            - style.css
            - image.png
        - templates/
        - views.py
        - ...

    Also assume the following settings in project/settings.py:

    STATIC_URL = "/static/"
    STATIC_ROOT = "/path/on/server/to/djangoproject/static"

    Running the python manage.py collectstatic command will create the following folder on the server:

    - /path/on/server/to/djangoproject/static
      - app1
        - style.css
        - script.js
        - img.jpg
      - app2
        - style.css
        - image.png

    Notice that within each static folder, there’s another folder with the app’s name. This prevents namespacing conflicts after the static files are collected; as you can see in the above file structure, this keeps app1/style.css and app2/style.css distinct. From here, the application will look for static files in this structure at the STATIC_ROOT during production. As such, reference static files as follows in a template in app1/templates/:

    {% load static %}
    <link rel="stylesheet" type="text/css" href="{% static "app1/style.css" %}">

    Django automatically figures out where to get static files from in development to model this behavior; you do not need to run collectstatic during development.
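    To build an intuition for what collectstatic does, here is a minimal sketch in plain Python. It only mimics the copy-and-merge behavior described above; it is not Django’s actual implementation, and the directory names are invented for the demo:

```python
import shutil
import tempfile
from pathlib import Path

# Toy sketch of collectstatic: copy each app's namespaced static/ folder
# into a single shared STATIC_ROOT. Not Django's real implementation.
def collect_static(app_dirs, static_root):
    static_root = Path(static_root)
    for app_dir in app_dirs:
        src = Path(app_dir) / "static"
        if src.is_dir():
            # dirs_exist_ok=True merges each app's tree into the shared root
            shutil.copytree(src, static_root, dirs_exist_ok=True)

# demo with a throwaway directory layout mirroring the structure above
with tempfile.TemporaryDirectory() as tmp:
    root = Path(tmp)
    (root / "app1" / "static" / "app1").mkdir(parents=True)
    (root / "app1" / "static" / "app1" / "style.css").write_text("body{}")
    (root / "app2" / "static" / "app2").mkdir(parents=True)
    (root / "app2" / "static" / "app2" / "style.css").write_text("h1{}")
    collect_static([root / "app1", root / "app2"], root / "static_root")
    print(sorted(p.name for p in (root / "static_root").iterdir()))  # ['app1', 'app2']
```

    Because each app nests its files under a folder bearing its own name, the merge never overwrites one app’s style.css with another’s.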

    For more details, see the Django documentation.

    Media Files

    Imagine a professional networking site with a database of users. Each of those users would have an associated profile, which might contain, among other things, an avatar image and a resume document. Here’s a short example model of that information:

    from django.db import models
    from django.contrib.auth.models import User
    
    def avatar_path(instance, filename):
        return "avatar_{}_{}".format(instance.user.id, filename)
    
    class Profile(models.Model):
        user = models.OneToOneField(User, on_delete=models.CASCADE)
        resume = models.FileField(upload_to="path/string")
        avatar = models.ImageField(upload_to=avatar_path)

    For this to work, you need the following options in project/settings.py, like with static assets:

    MEDIA_URL = "/media/"
    MEDIA_ROOT = "/path/on/server/to/media"

    An ImageField inherits from FileField, so it shares the same parameters and capabilities. Both fields take an optional upload_to argument: a path string that is appended to MEDIA_ROOT to determine where the file is stored, after which the file is accessible at the same path on top of MEDIA_URL. The upload_to argument can also take a function that returns a string, as demonstrated by the avatar_path function.
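    Since an upload_to callable is a plain Python function, we can sanity-check the avatar_path logic outside Django by mocking the model instance. The SimpleNamespace stand-in below is our own test scaffolding, not part of Django:

```python
from types import SimpleNamespace

# Same logic as the article's upload_to callable: in Django, `instance`
# is the model instance being saved and `filename` the uploaded file's name.
def avatar_path(instance, filename):
    return "avatar_{}_{}".format(instance.user.id, filename)

# Mock just enough of a Profile instance to exercise the function.
profile = SimpleNamespace(user=SimpleNamespace(id=42))
print(avatar_path(profile, "me.png"))  # avatar_42_me.png
```

    The returned path is appended to MEDIA_ROOT, so this user’s avatar would be stored as MEDIA_ROOT/avatar_42_me.png.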

    Make sure to omit the media files directory and its contents from version control. Its contents may conflict when two developers test the same application on different machines, and it is, unlike static assets, not a part of the deployable Django application.

    Option 2: Django With Services

    My guiding philosophy is to use tools for what they’re best at. Django is an amazing framework, and it provides great tooling out of the box for user authentication, server-side rendering, working with models and forms, administrative functions, and dozens of other essential aspects of building web applications. However, its tooling for handling static assets and media files is not, in my opinion, well-suited for production on scalable sites. The Django core developers recognize that many people choose alternate approaches to handling these files in production; the framework is very good at getting out of your way when you do. Most Django sites intended for general use will want to incorporate static assets and handle media files using these non-Django-specific approaches.

    Static Assets On A CDN

    While small-to-medium projects can get away without one, a CDN (content delivery network) is easy to use and improves the performance of applications of any size. A CDN is a network of servers, generally worldwide, that distributes and serves web content, mostly static assets. Popular CDNs include Cloudflare CDN, Amazon CloudFront, and Fastly. To use a CDN, you upload your static files, then in your application reference them as follows:

    <link rel="stylesheet" type="text/css" href="https://cdn.example.com/path/to/your/files/app1/style.css">

    This process is easy to integrate with your Django deployment scripts. After running the python manage.py collectstatic command, copy the generated directory to your CDN (a process that varies substantially based on the service you’re using), then remove the static assets from the Django deployment package.

    In development, you’ll want to access different copies of your static assets than in production. This way, you can make changes locally without affecting the production site. You can either use local assets or run a second instance of the CDN to deliver the files. Configure yourproject/settings.py with some custom variable, like CDN_URL, and use that value in your templates to ensure you’re using the correct version of assets in development and production.
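    As a rough illustration, such a settings fragment might look like the following. CDN_URL and the DJANGO_ENV environment variable are names invented for this sketch, and the CDN domain is a placeholder:

```python
import os

# Hypothetical yourproject/settings.py fragment: serve local assets in
# development and CDN-hosted copies in production. Names are illustrative.
DJANGO_ENV = os.environ.get("DJANGO_ENV", "development")

if DJANGO_ENV == "development":
    CDN_URL = "/static/"
else:
    CDN_URL = "https://cdn.example.com/assets/"

print(CDN_URL)
```

    Templates would then prefix asset paths with CDN_URL so the same markup works in both environments.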

    One final note is that many libraries for CSS and JavaScript have free CDNs that most websites can use. If you’re loading, say, Bootstrap 4 or underscore.js, you can skip the hassle of using your own copy in development and the expense of serving your own copies in production by using these public CDNs.

    Media Files with a Dedicated Filestore

    No production Django site should store user files in a simple /media/ folder somewhere on the server that runs the site. Here are three of the many reasons why:

    1. If you need to scale up the site by adding multiple servers, you need some way of copying and syncing the uploaded files across those servers.
    2. If a server crashes, the source code is backed up in your version control system, but media files aren’t backed up by default unless you configure your server to do so, and for that effort you’d be better off using a dedicated filestore.
    3. In case of malicious activity, it’s somewhat better to keep user-uploaded files on a separate server from the one running the application, although this in no way removes the requirement to validate user-uploaded files.

    Integrating a third party to store your user-uploaded files is really easy. You don’t need to change anything in your code, except maybe removing or modifying the upload_to value of FileFields in your models, and configuring a few settings. For example, if you were planning to store your files in AWS S3, you’d want to do the following, which is very similar to the process of storing files with Google Cloud, Azure, Backblaze, or similar competing services.

    First, you’ll need to install the libraries boto3 and django-storages. Then, you need to set up a bucket and IAM role on AWS, which is outside the scope of this article, but you can find instructions here. Once all of that is configured, you need to add three variables to your project/settings.py:

    DEFAULT_FILE_STORAGE = "storages.backends.s3boto3.S3Boto3Storage"
    AWS_STORAGE_BUCKET_NAME = "BUCKET_NAME"
    AWS_S3_REGION_NAME = "us-east-2"

    Additionally, you will need to set up credential access to your AWS bucket. Some tutorials will demonstrate adding an ID and secret key to your settings file or as environment variables, but these are insecure practices. Instead, use django-storages with the AWS CLI to configure the keys, as described here. You may also be interested in the django-storages documentation.

    You don’t want development or testing media files to get mixed up with uploads from actual users. Avoiding this is pretty simple: set up multiple buckets, one for development (or one for each developer), one for testing, and one for production. Then, all you need to change is the AWS_STORAGE_BUCKET_NAME setting per environment and you’re good to go.
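    A sketch of that per-environment switch might look like this. The environment variable and bucket names below are made up for illustration:

```python
import os

# Illustrative only: pick the S3 bucket per environment so development and
# test uploads never mix with production data. All names are hypothetical.
BUCKETS = {
    "development": "myapp-media-dev",
    "testing": "myapp-media-test",
    "production": "myapp-media-prod",
}

ENVIRONMENT = os.environ.get("DJANGO_ENV", "development")
AWS_STORAGE_BUCKET_NAME = BUCKETS.get(ENVIRONMENT, BUCKETS["development"])

print(AWS_STORAGE_BUCKET_NAME)
```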

    Performance And Availability

    There are numerous factors that affect the performance and reliability of your website. Here are some important ones when considering static and media files that matter regardless of which approach you take to managing them.

    Cost

    Serving files to a user costs money for two reasons: storage and bandwidth. You have to pay the hosting provider to store the files for you, but you also have to pay them to serve the files. Bandwidth is substantially more expensive than storage (for example, at the time of writing, AWS S3 charges 2.3 cents per gigabyte for storage versus 9 cents per gigabyte of data transfer out to the Internet). The economics of a file store like S3 or a CDN are different than the economics of a generalized host like a Digital Ocean droplet. Take advantage of specialization and economies of scale by moving expensive files to services designed for them. Furthermore, many file stores and CDNs offer free plans, so sites small enough to get away without them can use them anyway and reap the benefits without any additional infrastructure costs.
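    To make the storage-versus-bandwidth trade-off concrete, here is a back-of-envelope calculation using the S3 prices quoted above. Real bills include more line items (request counts, storage classes, and so on), so treat this as a rough estimate only:

```python
# Prices quoted in the text: USD per GB-month of storage, USD per GB of
# egress to the Internet. Rough estimate only; real pricing is tiered.
STORAGE_PER_GB = 0.023
EGRESS_PER_GB = 0.09

def monthly_cost(gb_stored, gb_served):
    return gb_stored * STORAGE_PER_GB + gb_served * EGRESS_PER_GB

# 50 GB of assets, each served ten times over in a month:
print(round(monthly_cost(50, 500), 2))  # 46.15
```

    Notice that bandwidth dominates: serving the files costs roughly forty times more than storing them in this example, which is why offloading delivery to a CDN pays off.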

    Compression and Transcoding

    Most of the problems caused by static assets like photos and videos are because they are big files. Naturally, developers address this by trying to make these files smaller. There are a number of ways to do this using a mix of compression and transcoding in two general categories: lossless and lossy. Lossless compression retains the original quality of the assets but provides relatively modest decreases in file size. Lossy compression, or transcoding into a lossy format, allows for much smaller file sizes at the expense of losing some of the quality of the original artifact. An example of this is transcoding video to a lower bitrate. For details, check out this article about optimizing video delivery. When serving large files over the web, bandwidth speeds often demand that you serve highly compressed artifacts, requiring lossy compression.

    Unless you’re YouTube, compression and transcoding don’t happen on the fly. Static assets should be appropriately formatted prior to deployment, and you can enforce basic file-type and file-size restrictions on user uploads to ensure sufficient compression and appropriate formatting of your users’ media files.

    Minification

    While JavaScript and CSS files aren’t usually as large as images, they can often be compressed to squeeze into fewer bytes. This process is called minification. Minification does not change the encoding of the files: they’re still text, and a minified file still needs to be valid code for its original language. Minified files retain their original extensions.

    The main thing removed in a minified file is unnecessary whitespace, and from the computer’s perspective almost all whitespace in CSS and JavaScript is unnecessary. Minification schemes also shorten variable names and remove comments.
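    To illustrate the idea — and only the idea, since real minifiers such as Terser or cssnano parse the code rather than pattern-match it — here is a deliberately naive whitespace stripper for CSS:

```javascript
// Naive illustration only: strips comments and whitespace around CSS
// punctuation. Real minifiers parse the stylesheet and do much more.
function naiveMinify(css) {
  return css
    .replace(/\/\*[\s\S]*?\*\//g, '')   // drop comments
    .replace(/\s*([{}:;,])\s*/g, '$1')  // drop whitespace around punctuation
    .trim();
}

console.log(naiveMinify(".title {\n  color: #35495e;\n}"));
// .title{color:#35495e;}
```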

    Minification by default obfuscates code; as a developer, you should work exclusively with non-minified files. Some automatic step during the deployment process should minify the files before they are stored and served. If you’re using a library provided by a third-party CDN, make sure you’re using the minified version of that library if available. HTML files can be minified, but as Django uses server-side rendering, the processing cost of doing so on the fly would most likely outweigh the small decrease in page size.

    Global Availability

    Just like it takes less time to send a letter to your neighbor than it does to send it across the country, so too does it take less time to transmit data nearby than across the world. One of the ways that a CDN improves page performance is by copying assets onto servers across the world. Then, when a client makes a request, they receive the static assets from the nearest server (often called an edge node), decreasing load times. One of the advantages to using a CDN with a Django site is decoupling the global distribution of your static assets from the global distribution of your code.

    Client-Side Caching

    What’s better than having a static file on a server near your user? Having the static file already stored on your user’s device! Caching is the process of storing the results of a computation or request so that they can be accessed repeatedly more quickly. Just like a CSS stylesheet can be cached around the world in a CDN, it can be cached in the client’s browser the first time they load a page from your site. Then, the stylesheet is available on the device itself in subsequent requests, so the client is making fewer requests, improving page load time, and decreasing bandwidth use.

    Browsers perform their own caching operations, but if your site enjoys substantial traffic, you can optimize your client-side caching behavior using Django’s cache framework.

    In Conclusion

    Again, my guiding philosophy is to use tools for what they’re best at. Single-server projects and small scalable deployments with only lightweight static assets can use Django’s built-in static asset management, but most applications should separate out assets to be served over a CDN.

    If your project is intended for any kind of real-world use, do not store media files with Django’s default method; use a service instead. With enough traffic (and “enough traffic” is a relatively small number on the scale of the Internet), the additional complications to architecture, the development process, and deployment are more than worth it for the performance, reliability, and cost savings of using a separate CDN and file storage solution for static and media files, respectively.

    Recommended Reading

    • Part 1: User Models And Authentication
    • Part 2: Templating Saves Lines
    • Part 3: Models, Admin, And Harnessing The Relational Database

    From Static Sites To End User JAMstack Apps With FaunaDB — Smashing Magazine

    06/09/2020

    About The Author

    Bryan is a designer, developer, and educator with a passion for CSS and static sites. He actively works to mentor and teach developers and designers the value …

    To make the move from “site” to app, we’ll need to dive into the world of “app-generated” content. In this article, we’ll get started in this world with the power of serverless data. We’ll start with a simple demo by ingesting and posting data to FaunaDB and then extend that functionality in a full-fledged application using Auth0, FaunaDB’s Token system and User-Defined Functions.

    The JAMstack has proven itself to be one of the top ways of producing content-driven sites, but it’s also a great place to house applications, as well. If you’ve been using the JAMstack for your performant websites, the demos in this article will help you extend those philosophies to applications as well.

    When using the JAMstack to build applications, you need a data service that fits into the most important aspects of the JAMstack philosophy:

    • Global distribution
    • Zero operational needs
    • A developer-friendly API.

    In the JAMstack ecosystem there are plenty of software-as-a-service companies that provide ways of getting and storing specific types of data. Whether you want to send emails, SMS or make phone calls (Twilio) or accept form submissions efficiently (Formspree, Formingo, Formstack, etc.), it seems there’s an API for almost everything.

    These are great services that can do a lot of the low-level work of many applications, but once your data is more complex than a spreadsheet or needs to be updated and stored in real time, it might be time to look into a database.

    The service APIs can still be in use, but a central database managing the state and operations of your app becomes much more important. Even if you need a database, you still want it to follow the core JAMstack philosophies we outlined above. That means we don’t want to host our own database server; we need a Database-as-a-Service solution. Our database needs to be optimized for the JAMstack:

    • Optimized for API calls from a browser or build process.
    • Flexible to model your data in the specific ways your app needs.
    • Global distribution of our data, just as a CDN distributes our sites.
    • Hands-free scaling with no need of a database administrator or developer intervention.

    Whatever service you look into needs to follow these tenets of serverless data. In our demos, we’ll explore FaunaDB, a global serverless database, featuring native GraphQL to assure that we keep our apps in step with the philosophies of the JAMstack.

    Let’s dive into the code!

    A JAMstack Guestbook App With Gatsby And Fauna

    I’m a big fan of reimagining the internet tools and concepts of the 1990s and early 2000s. We can take these concepts and make them feel fresh with the new set of tools and interactions.

    A look at the app we’re creating: a signature form with a signature list below. The form will populate a FaunaDB database, and that database will create the view list. (Large preview)

    In this demo, we’ll create an application that was all the rage in that time period: the guestbook. A guestbook is nothing but app-generated content and interaction. A user can come to the site, see all the signatures of past “guests” and then leave their own.

    To start, we’ll statically render our site and build our data from Fauna during our build step. This will provide the fast performance we expect from a JAMstack site. To do this, we’ll use GatsbyJS.

    Initial setup

    Our first step will be to install Gatsby globally on our computer. If you’ve never spent much time in the command line, Gatsby’s “part 0” tutorial will help you get up and running. If you already have Node and NPM installed, you’ll install the Gatsby CLI globally and create a new site with it using the following commands:

    npm install -g gatsby-cli
    gatsby new <directory-to-install-into> <starter>

    Gatsby comes with a large repository of starters that can help bootstrap your project. For this demo, I chose a simple starter that came equipped with the Bulma CSS framework.

    gatsby new guestbook-app https://github.com/amandeepmittal/gatsby-bulma-quickstart

    This gives us a good starting point and structure. It also has the added benefit of coming with styles that are ready to go.

    Let’s do a little cleanup for things we don’t need. We’ll start by simplifying components/header.js:

    import React from 'react';
    
    import './style.scss';
    
    const Header = ({ siteTitle }) => (
      <section className="hero gradientBg ">
        <div className="hero-body">
          <div className="container container--small center">
            <div className="content">
              <h1 className="is-uppercase is-size-1 has-text-white">
                Sign our Virtual Guestbook
              </h1>
              <p className="subtitle has-text-white is-size-3">
                If you like all the things that we do, be sure to sign our virtual guestbook
              </p>
            </div>
          </div>
        </div>
      </section>
    );
    
    export default Header;
    

    This will get rid of much of the branded content. Feel free to customize this section, but we won’t write any of our code here.

    Next we’ll clean out the components/midsection.js file. This will be where our app’s code will render.

    import React, { useState } from 'react';
    import Signatures from './signatures';
    import SignForm from './sign-form';
    
    
    const Midsection = () => {
    
        // Start with an empty list; we’ll populate it with signatures from Fauna later
        const [sigData, setSigData] = useState([]);
        return (
            <section className="section">
                <div className="container container--small">
                    <section className="section is-small">
                        <h2 className="title is-4">Sign here</h2>
                        <SignForm></SignForm>
                    </section>
    
                    <section className="section">
                        <h2 className="title is-5">View Signatures</h2>
                        <Signatures></Signatures>
                    </section>
                </div>
            </section>
        )
    }
    
    export default Midsection;
    

    In this code, we’ve mostly removed the “site” content and added a couple of new components: a <SignForm> component that will contain our form for submitting a signature, and a <Signatures> component to contain the list of signatures.

    Now that we have a relatively blank slate, we can set up our FaunaDB database.

    Setting Up A FaunaDB Collection

    After logging into Fauna (or signing up for an account), you’ll be given the option to create a new Database. We’ll create a new database called guestbook.

    signatures Collection
    The initial state of our signatures Collection after we add our first Document. (Large preview)

    Inside this database, we’ll create a “Collection” called signatures. Collections in Fauna are groups of Documents, which are in turn JSON objects.

    In this new Collection, we’ll create a new Document with the following JSON:

    {
      "name": "Bryan Robinson",
      "message": "Lorem ipsum dolor amet sum Lorem ipsum dolor amet sum Lorem ipsum dolor amet sum Lorem ipsum dolor amet sum"
    }

    This will be the simple data schema for each of our signatures. For each of these Documents, Fauna will create additional data surrounding it.

    {
     "ref": Ref(Collection("signatures"), "262884172900598291"),
     "ts": 1586964733980000,
     "data": {
       "name": "Bryan Robinson",
       "message": "Lorem ipsum dolor amet sum Lorem ipsum dolor amet sum Lorem ipsum dolor amet sum Lorem ipsum dolor amet sum "
     }
    }

    The ref is the unique identifier inside of Fauna, and ts is the time the Document was created or updated (as a Unix timestamp in microseconds).

    After creating our data, we want an easy way to grab all that data and use it in our site. In Fauna, the most efficient way to get data is via an Index. We’ll create an Index called allSignatures. This will grab and return all of our signature Documents in the Collection.
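    If you prefer the Fauna Shell to the dashboard, an Index like this can be sketched in FQL (assuming no special terms or values are needed):

    ```
    CreateIndex({
      name: "allSignatures",
      source: Collection("signatures")
    })
    ```

    With no terms or values specified, the Index simply returns a Ref for every Document in the signatures Collection.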

    Now that we have an efficient way of accessing the data in Gatsby, we need Gatsby to know where to get it. Gatsby has a repository of plugins that can fetch data from a variety of sources, Fauna included.

    Setting up the Fauna Gatsby Data Source Plugin

    npm install gatsby-source-faunadb

    After we install this plugin to our project, we need to configure it in our gatsby-config.js file. In the plugins array of our project, we’ll add a new item.

    {
        resolve: `gatsby-source-faunadb`,
        options: {
            // The secret for the key you're using to connect to your Fauna database.
            // You can generate one of these in the "Security" tab of your Fauna Console.
            secret: process.env.YOUR_FAUNADB_SECRET,
            // The name of the index you want to query
            // You can create an index in the "Indexes" tab of your Fauna Console.
            index: `allSignatures`,
            // This is the name under which your data will appear in Gatsby GraphQL queries
            // The following will create queries called `allSignatures` and `signatures`.
            type: "Signatures",
            // If you need to limit the number of documents returned, you can specify an
            // optional maximum number to read.
            // size: 100
        },
    },
    

    In this configuration, we provide the plugin with our Fauna secret Key, the name of the Index we created, and the “type” we want to access in our Gatsby GraphQL queries.

    Where did that process.env.YOUR_FAUNADB_SECRET come from?

    In your project, create a .env file — and include that file in your .gitignore! This file provides the secret value to Gatsby’s Webpack configuration, keeping your sensitive information out of GitHub.

    YOUR_FAUNADB_SECRET="value from fauna"

    We can then head over to the “Security” tab in our Database and create a new key. Since this is a protected secret, it’s safe to use a “Server” role. When you save the Key, it’ll provide your secret. Be sure to grab that now, as you can’t get it again (without recreating the Key).

    Once the configuration is set up, we can write a GraphQL query in our components to grab the data at build time.

    Getting the data and building the template

    We’ll add this query to our Midsection component to make the data accessible to both of our child components.

    const Midsection = () => {
     const data = useStaticQuery(
     graphql`
                query GetSignatures {
                    allSignatures {
                      nodes {
                        name
                        message
                        _ts
                        _id
                      }
                    }
                }`
            );
    // ... rest of the component
    }

    This will access the Signatures type we created in the configuration. It will grab all the signatures and provide an array of nodes. Those nodes will contain the data we specified: name, message, _ts, and _id.

    We’ll set that data into our state — this will make updating it live easier later.

    const [sigData, setSigData] = useState(data.allSignatures.nodes);

    Now we can pass sigData as a prop into <Signatures> and setSigData into <SignForm>.

    <SignForm setSigData={setSigData}></SignForm>
    
    
    <Signatures sigData={sigData}></Signatures>

    Let’s set up our Signatures component to use that data!

    import React from 'react';
    import Signature from './signature'   
    
    const Signatures = (props) => {
        const SignatureMarkup = () => {
            return props.sigData.map((signature, index) => {
                return (
                    <Signature key={index} signature={signature}></Signature>
                )
            }).reverse()
        }
    
        return (
            <SignatureMarkup></SignatureMarkup>
        )
    }
    
    export default Signatures
    

    In this function, we’ll .map() over our signature data and create an array of markup based on a new <Signature> component that we pass the data into, then .reverse() it so the newest signatures appear first.

    The Signature component will handle formatting our data and returning an appropriate set of HTML.

    import React from 'react';
    
    const Signature = ({signature}) => {
        const dateObj = new Date(signature._ts / 1000);
        let dateString = `${dateObj.toLocaleString('default', {weekday: 'long'})}, ${dateObj.toLocaleString('default', { month: 'long' })} ${dateObj.getDate()} at ${dateObj.toLocaleTimeString('default', {hour: '2-digit',minute: '2-digit', hour12: false})}`
    
        return (
        <article className="signature box">      
            <h3 className="signature__headline">{signature.name} - {dateString}</h3>
            <p className="signature__message">
                {signature.message} 
            </p>
        </article>
    )};
    
    export default Signature;
    

    At this point, if you start your Gatsby development server, you should have a list of signatures currently existing in your database. Run the following command to get up and running:

    gatsby develop

    Any signature stored in our database will build HTML in that component. But how can we get signatures INTO our database?

    Let’s set up a signature form component to send data and update our Signatures list.

    Let’s Make Our JAMstack Guestbook Interactive

    First, we’ll set up the basic structure for our component. It will render a simple form onto the page with a text input, a textarea, and a button for submission.

    import React from 'react';
    
    import faunadb, { query as q } from "faunadb"
    
    var client = new faunadb.Client({ secret: process.env.GATSBY_FAUNA_CLIENT_SECRET  })
    
    export default class SignForm extends React.Component {
        constructor(props) {
            super(props)
            this.state = {
                sigName: "",
                sigMessage: ""
            }
        }
    
        handleSubmit = async event => {
            // Handle the submission
        }
    
        handleInputChange = event => {
            // When an input changes, update the state
        }
    
        render() {
            return (
                <form onSubmit={this.handleSubmit}>
                    <div className="field">
                        <div className="control">
                        <label className="label">Your Name:
                            <input
                                className="input is-fullwidth"
                                name="sigName"
                                type="text"
                                value={this.state.sigName}
                                onChange={this.handleInputChange}
                            />
                        </label>
                        </div>
                    </div>
                    <div className="field">
                        <label>
                            Your Message:
                            <textarea 
                                rows="5"
                                name="sigMessage" 
                                value={this.state.sigMessage}
                                onChange={this.handleInputChange} 
                                className="textarea" 
                                placeholder="Leave us a happy note"></textarea>
    
                        </label>
                    </div>
                    <div className="buttons">
                        <button className="button is-primary" type="submit">Sign the Guestbook</button>
                    </div>
                </form>
            )
        }
    
    }
    

    To start, we’ll set up our state to include the name and the message. We’ll default them to blank strings and insert them into our <textarea> and <input>.

    When a user changes the value of one of these fields, we’ll use the handleInputChange method. When a user submits the form, we’ll use the handleSubmit method.

    Let’s break down both of those functions.

    handleInputChange = event => {
        const target = event.target
        const value = target.value
        const name = target.name
        this.setState({
            [name]: value,
        })
    }
    

    The input change handler accepts the event. From that event, it gets the current target’s value and name. We can then update the corresponding property on our state object — sigName, sigMessage, or anything else.

    Once the state has changed, we can use the state in our handleSubmit method.

    handleSubmit = async event => {
        event.preventDefault();
        const placeSig = await this.createSignature(this.state.sigName, this.state.sigMessage);
        this.addSignature(placeSig);
    }
    

    This function will call a new createSignature() method. This will connect to Fauna to create a new Document from our state items.

    The addSignature() method will update our Signatures list data with the response we get back from Fauna.
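    The addSignature() method itself isn’t shown in this article, so here’s a minimal sketch of the list update it performs, assuming the setSigData prop passed down from Midsection:

    ```javascript
    // Append the new signature returned from Fauna to the previous list.
    // Returning a new array (rather than mutating) lets React detect the change.
    function appendSignature(prevSigs, newSignature) {
      return [...prevSigs, newSignature];
    }

    // Inside SignForm, roughly:
    // addSignature = (placeSig) => {
    //   this.props.setSigData(prev => appendSignature(prev, placeSig));
    // }
    ```

    Because our Signatures component reverses the array when rendering, the appended (newest) entry shows up at the top of the list.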

    In order to write to our database, we’ll need to set up a new key in Fauna with minimal permissions. Our server key is allowed higher permissions because it’s only used during build and won’t be visible in our source.

    This key only needs to allow the ability to create new items in our signatures Collection.

    Note: A user could still be malicious with this key, but they can only do as much damage as a bot submitting that form, so it’s a trade-off I’m willing to make for this app.

    signatures-client-permissions
    A look at the FaunaDB security panel. In this shot, we’re creating a ‘client’ role that allows only the ‘Create’ permission for those API Keys. (Large preview)

    For this, we’ll create a new “Role” in the “Security” tab of our dashboard. We can add permissions around one or more of our Collections. In this demo, we only need signatures and we can select the “Create” functionality.

    After that, we generate a new key that uses that role.

    To use this key, we’ll instantiate a new version of the Fauna JavaScript SDK. This is a dependency of the Gatsby plugin we installed, so we already have access to it.

    import faunadb, { query as q } from "faunadb"
    
    var client = new faunadb.Client({ secret: process.env.GATSBY_FAUNA_CLIENT_SECRET })

    By using an environment variable prefixed with GATSBY_, we gain access to it in our browser JavaScript (be sure to add it to your .env file).

    By importing the query object from the SDK, we gain access to any of the methods available in the Fauna Query Language (FQL). In this case, we want to use the Create method to create a new Document in our Collection.

    createSignature = async (sigName, sigMessage) => {
        try {
            const queryResponse = await client.query(
                q.Create(
                    q.Collection('signatures'),
                    {
                        data: {
                            name: sigName,
                            message: sigMessage
                        }
                    }
                )
            )
            const signatureInfo = { name: queryResponse.data.name, message: queryResponse.data.message, _ts: queryResponse.ts, _id: queryResponse.ref.id }
            return signatureInfo
        } catch(err) {
            console.log(err);
        }
    }

    We pass the Create function to the client.query() method. Create takes a Collection reference and an object of information to pass to a new Document. In this case, we use q.Collection with a string of our Collection name to get the reference. The second argument holds our data. Since you can pass other items in this object, we need to tell Fauna that we’re specifically sending it the data property.

    Next, we pass it the name and message we collected in our state. The response we get back from Fauna is the entire object of our Document. This includes our data in a data object, as well as a Fauna ID and timestamp. We reformat that data in a way that our Signatures list can use and return that back to our handleSubmit function.

    Our submit handler will then pass that data into our setSigData prop which will notify our Signatures component to rerender with that new data. This gives our user immediate feedback that their submission has been accepted.

    Rebuilding the site

    This is all working in the browser, but the data hasn’t been updated in our static application yet.

    From here, we need to tell our JAMstack host to rebuild our site. Many hosts let you specify a webhook to trigger a deployment. Since I’m hosting this demo on Netlify, I can create a new “Deploy webhook” in their admin and write a triggerBuild function. This function will use the native JavaScript fetch() method to send a POST request to that URL. Netlify will then rebuild the application and pull in the latest signatures.

    triggerBuild = async () => {
        const response = await fetch(process.env.GATSBY_BUILD_HOOK, { method: "POST", body: "{}" });
        return response;
    }

    Both Gatsby Cloud and Netlify have implemented ways of handling “incremental” builds with Gatsby, drastically speeding up build times. These builds can happen very quickly now and feel almost as fast as a traditional server-rendered site.

    Every signature that gets added gives the user quick feedback that it’s been submitted, is perpetually stored in a database, and is served as HTML via a build process.

    Still feels a little too much like a typical website? Let’s take all these concepts a step further.

    Create A Mindful App With Auth0, Fauna Identity And Fauna User-Defined Functions (UDF)

    Being mindful is an important skill to cultivate. Whether it’s thinking about your relationships, your career, your family, or just going for a walk in nature, it’s important to be mindful of the people and places around you.

    Mindful Mission
    A look at the final app screen showing a ‘Mindful Mission,’ ‘Past Missions’ and a ‘Log Out’ button. (Large preview)

    This app intends to help you focus on one randomized idea every day and review the various ideas from recent days.

    To do this, we need to introduce a key element to most apps: authentication. With authentication, comes extra security concerns. While this data won’t be particularly sensitive, you don’t want one user accessing the history of any other user.

    Since we’ll be scoping data to a specific user, we also don’t want to store any secret keys on browser code, as that would open up other security flaws.

    We could create an entire authentication flow using nothing but our wits and a user database with Fauna. That may seem daunting and moves us away from the features we want to write. The great thing is that there’s certainly an API for that in the JAMstack! In this demo, we’ll explore integrating Auth0 with Fauna. We can use the integration in many ways.

    Setting Up Auth0 To Connect With Fauna

    Many implementations of authentication with the JAMstack rely heavily on Serverless functions. That moves much of the security concerns from a security-focused company like Auth0 to the individual developer. That doesn’t feel quite right.

    serverless function flow
    A diagram outlining the convoluted method of using a serverless function to manage authentication and token generation. (Large preview)

    The typical flow would be to send a login request to a serverless function. That function would request a user from Auth0. Auth0 would provide the user’s JSON Web Token (JWT) and the function would provide any additional information about the user our application needs. The function would then bundle everything up and send it to the browser.

    There are a lot of places in that authentication flow where a developer could introduce a security hole.

    Instead, let’s request that Auth0 bundle everything up for us inside the JWT it sends. Keeping security in the hands of the folks who know it best.

    Auth0’s Rule flow
    A diagram outlining the streamlined authentication and token generation flow when using Auth0’s Rule system. (Large preview)

    We’ll do this by using Auth0’s Rules functionality to ask Fauna for a user token to encode into our JWT. This means that unlike our Guestbook, we won’t have any Fauna keys in our front-end code. Everything will be managed in memory from that JWT.

    Setting up Auth0 Application and Rule

    First, we’ll need to set up the basics of our Auth0 Application.

    Following the configuration steps in their basic walkthrough gets the important basic information filled in. Be sure to fill out the proper localhost port for your bundler of choice as one of your authorized domains.

    After the basics of the application are set up, we’ll go into the “Rules” section of our account.

    Click “Create Rule” and select “Empty Rule” (or start from one of their many templates that are helpful starting points).

    Here’s our Rule code

    async function (user, context, callback) {
      const FAUNADB_SECRET = 'Your Server secret';
      const faunadb = require('faunadb@2.11.1');
      const { query: q } = faunadb;
      const client = new faunadb.Client({ secret: FAUNADB_SECRET });
      try {
        const token = await client.query(
          q.Call('user_login_or_create', user.email, user) // Call UDF in fauna
        );
    
        context.idToken['https://faunadb.com/id/secret'] = token.secret;
        callback(null, user, context);
      } catch(error) {
        console.log('->', error);
        callback(error, user, context);
      }
    }
    

    We give the Rule a function that takes the user, context, and a callback from Auth0. We grab a Server token to initialize the Fauna JavaScript SDK and instantiate our client. Just like in our Guestbook, we’ll create a new Database and manage our Tokens in the “Security” tab.

    From there, we want to send a query to Fauna to create or log in our user. To keep our Rule code simple (and make it reusable), we’ll write our first Fauna “User-Defined Function” (UDF). A UDF is a function written in FQL that runs on Fauna’s infrastructure.

    First, we’ll set up a Collection for our users. You don’t need to make a first Document here, as they’ll be created behind the scenes by our Auth0 rule whenever a new Auth0 user is created.

    Next, we need an Index to search our users Collection based on the email address. This Index is simpler than our Guestbook, so we can add it to the Dashboard. Name the Index user_by_email, set the Collection to users, and the Terms to data.email. This will allow us to pass an email address to the Index and get a matching user Document back.
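    The same Index can also be expressed in FQL in the Fauna Shell — a sketch of what the Dashboard fields above translate to:

    ```
    CreateIndex({
      name: "user_by_email",
      source: Collection("users"),
      terms: [
        { field: ["data", "email"] }
      ]
    })
    ```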

    It’s time to create our UDF. In the Dashboard, navigate to “Functions” and create a new one named user_login_or_create.

    Query(
      Lambda(
        ["userEmail", "userObj"], // Arguments
        Let(
          { user: Match(Index("user_by_email"), Var("userEmail")) }, // Set user variable 
          If(
            Exists(Var("user")), // Check if the User exists
            Create(Tokens(null), { instance: Select("ref", Get(Var("user"))) }), // Return a token for that item in the users collection (in other words, the user)
            Let( // Else statement: Set a variable
              {
                newUser: Create(Collection("users"), { data: Var("userObj") }), // Create a new user and get its reference
                token: Create(Tokens(null), { // Create a token for that user
                  instance: Select("ref", Var("newUser"))
                })
              },
              Var("token") // return the token
            )
          )
        )
      )
    )
    

    Our UDF will accept a user email address and the rest of the user information. If a user exists in a users Collection, it will create a Token for the user and send that back. If a user doesn’t exist, it will create that user Document and then send a Token to our Auth0 Rule.

    We can then store the Token as a custom claim on the idToken attached to the context in our JWT. Custom claims need a URL as their key; since this is a Fauna token, we can use a Fauna URL. Whatever you choose, you’ll use that same key to access the claim in your code.

    This Token doesn’t have any permissions yet. We need to go into our Security rules and set up a new Role.

    We’ll create an “AuthedUser” role. We don’t need to add any permissions yet, but as we create new UDFs and new Collections, we’ll update the permissions here. Instead of generating a new Key to use this Role, we want to add to this Role’s “Memberships”. On the Memberships screen, you can select a Collection to add as a member. The documents in this Collection (in our case, our Users), will have the permissions set on this role given via their Token.

    Now, when a user logs in via Auth0, they’ll be returned a Token that matches their user Document and has its permissions.

    From here, we come back to our application.

    Implement logic for when the User is logged in

    Auth0 has an excellent walkthrough for setting up a “vanilla” JavaScript single-page application. Most of this code is a refactor of that to fit the code splitting of this application.

    default Auth0 Login/Signup screen
    The default Auth0 Login/Signup screen. All the login flow can be contained in the Auth0 screens. (Large preview)

    First, we’ll need the Auth0 SPA SDK.

    npm install @auth0/auth0-spa-js
    import createAuth0Client from '@auth0/auth0-spa-js';
    import { changeToHome } from './layouts/home'; // Home Layout
    import { changeToMission } from './layouts/myMind'; // Current Mindfulness Mission Layout
    
    let auth0 = null;
    var currentUser = null;
    const configureClient = async () => {
        // Configures Auth0 SDK
        auth0 = await createAuth0Client({
          domain: "mindfulness.auth0.com",
          client_id: "32i3ylPhup47PYKUtZGRnLNsGVLks3M6"
        });
    };
    
    const checkUser = async () => {
        // return user info from any method
        const isAuthenticated = await auth0.isAuthenticated();
        if (isAuthenticated) {
            return await auth0.getUser();
        }
    }
    
    const loadAuth = async () => {
        // Loads and checks auth
        await configureClient();      
        
        const isAuthenticated = await auth0.isAuthenticated();
        if (isAuthenticated) {
            // show the gated content
            currentUser = await auth0.getUser();
            changeToMission(); // Show the "Today" screen
            return;
        } else {
            changeToHome(); // Show the logged out "homepage"
        }
    
        const query = window.location.search;
        if (query.includes("code=") && query.includes("state=")) {
    
            // Process the login state
            await auth0.handleRedirectCallback();
           
            currentUser = await auth0.getUser();
            changeToMission();
    
            // Use replaceState to redirect the user away and remove the querystring parameters
            window.history.replaceState({}, document.title, "/");
        }
    }
    
    const login = async () => {
        await auth0.loginWithRedirect({
            redirect_uri: window.location.origin
        });
    }
    const logout = async () => {
        auth0.logout({
            returnTo: window.location.origin
        });
        window.localStorage.removeItem('currentMindfulItem') 
        changeToHome(); // Change back to logged out state
    }
    
    export { auth0, loadAuth, currentUser, checkUser, login, logout }
    

    First, we configure the SDK with our client_id from Auth0. This is safe information to store in our code.

    Next, we set up a function that can be exported and used in multiple files to check if a user is logged in. The Auth0 library provides an isAuthenticated() method. If the user is authenticated, we can return the user data with auth0.getUser().

    We set up login() and logout() functions and a loadAuth() function to handle the return from Auth0 and change the state of our UI to the “Mission” screen with today’s Mindful idea.

    Once this is all set up, we have our authentication and user login squared away.

    We’ll create a new function that our Fauna-related functions can reference to get a client with the proper token.

    const AUTH_PROP_KEY = "https://faunadb.com/id/secret";
    var faunadb = require('faunadb'),
    q = faunadb.query;
    
    async function getUserClient(currentUser) {
        return new faunadb.Client({ secret: currentUser[AUTH_PROP_KEY]})
    }
    

    This returns a new connection to Fauna using our Token from Auth0. This token works the same as the Keys from previous examples.

    Generate a random Mindful topic and store it in Fauna

    To start, we need a Collection of items to store our list of Mindful objects. We’ll create a Collection called mindful_things and create a number of items with the following schema:

    {
       "title": "Career",
       "description": "Think about the next steps you want to make in your career. What’s the next easily attainable move you can make?",
       "color": "#C6D4FF",
       "textColor": "black"
     }

    From here, we’ll move to our JavaScript and create a function for adding and returning a random item from that Collection.

    async function getRandomMindfulFromFauna(userObj) {
        const client = await getUserClient(userObj);
    
        try {
            let mindfulThings = await client.query(
                q.Paginate(
                    q.Documents(q.Collection('mindful_things'))
                )
            )
            let randomMindful = mindfulThings.data[Math.floor(Math.random()*mindfulThings.data.length)];
            let creation = await client.query(q.Call('addUserMindful', randomMindful));
            
            return creation.data.mindful;
    
        } catch (error) {
            console.log(error)
        }   
    }
    

    To start, we’ll instantiate our client with our getUserClient() method.

    From there, we’ll grab all the Documents from our mindful_things Collection. Paginate() by default grabs 64 items per page, which is more than enough for our data. We’ll grab a random item from the array that’s returned from Fauna. This will be what Fauna refers to as a “Ref”. A Ref is a full reference to a Document that the various FQL functions can use to locate a Document.

    We’ll pass that Ref to a new UDF that will handle storing a new, timestamped object for that user in a new user_things Collection.

    We’ll create the new Collection, but we’ll have our UDF provide the data for it when called.

    We’ll create a new UDF in the Fauna dashboard with the name addUserMindful that will accept that random Ref.

    As with our login UDF before, we’ll use the Lambda() FQL method which takes an array of arguments.

    Without passing any user information to the function, FQL can obtain our User Ref just by calling the Identity() function. All we have from our randomRef is the reference to our Document, so we’ll run a Get() to retrieve the full object. We’ll then Create() a new Document in the user_things Collection with our User Ref and our random information.
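    Putting those steps together, the addUserMindful UDF might be sketched like this — the exact shape of the stored data object is an assumption:

    ```
    Query(
      Lambda(
        ["randomRef"], // Argument: the random Ref passed in from our JavaScript
        Create(Collection("user_things"), {
          data: {
            user: Identity(), // The current User Ref, derived from the Token in use
            mindful: Select("data", Get(Var("randomRef"))) // The full mindful object
          }
        })
      )
    )
    ```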

    We then return the creation object back out of our Lambda. Back in our JavaScript, we return the data object with the mindful key to wherever this function was called.

    Render our Mindful Object on the page

    When our user is authenticated, you may remember it called a changeToMission() method. This function switches the items on the page from the “Home” screen to markup that can be filled in by our data. After it’s added to the page, the renderToday() function gets called to add content by a few rules.

    The first rule of Serverless Data Club is not to make HTTP requests unless you have to. In other words, cache when you can. Whether that’s creating a full PWA-scale application with Service Workers or just caching your database response with localStorage, cache data, and fetch only when necessary.

    The first rule of our conditional is to check localStorage. If localStorage does contain a currentMindfulItem, then we need to check its date to see if it’s from today. If it is, we’ll render that and make no new requests.

    The second rule of Serverless Data Club is to make as few requests as possible without the responses of those requests being too large. In that vein, our second conditional rule is to check the latest item from the current user and see if it is from today. If it is, we’ll store it in localStorage for later and then render the results.

    Finally, if none of these are true, we’ll fire our getRandomMindfulFromFauna() function, format the result, store that in localStorage, and then render the result.
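    Those three rules might be sketched as the following renderToday() flow — the helper names isToday(), storeCurrent(), and render(), plus the shape of the cached object, are assumptions for illustration:

    ```javascript
    // Check whether a millisecond timestamp falls on today's date.
    function isToday(timestamp) {
      return new Date(timestamp).toDateString() === new Date().toDateString();
    }

    // Hypothetical sketch of renderToday(), following the three caching rules.
    async function renderToday(user) {
      // Rule 1: a cached item from today needs no request at all.
      const cached = JSON.parse(window.localStorage.getItem('currentMindfulItem'));
      if (cached && isToday(cached.date)) return render(cached);

      // Rule 2: otherwise, check the user's latest item in Fauna.
      const latest = await getLatestFromFauna(user);
      if (latest && !latest.err && isToday(latest.latestTime / 1000)) {
        storeCurrent(latest); // cache in localStorage for later
        return render(latest);
      }

      // Rule 3: nothing from today exists, so create a new random item.
      const fresh = await getRandomMindfulFromFauna(user);
      storeCurrent(fresh);
      return render(fresh);
    }
    ```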

    Get the latest item from a user

    I glossed over it in the last section, but we also need some functionality to retrieve the latest mindful object from Fauna for our specific user. In our getLatestFromFauna() method, we’ll again instantiate our Fauna client and then call a new UDF.

    Our new UDF is going to call a Fauna Index. An Index is an efficient way of doing a lookup on a Fauna database. In our case, we want to return all user_things by the user field. Then we can also sort the result by timestamp and reverse the default ordering of the data to show the latest first.

    Simple Indexes can be created in the Index dashboard. Since we want to do the reverse sort, we’ll need to enter some custom FQL into the Fauna Shell (you can do this in the database dashboard Shell section).

    CreateIndex({
      name: "getMindfulByUserReverse",
      serialized: true,
      source: Collection("user_things"),
      terms: [
        {
          field: ["data", "user"]
        }
      ],
      values: [
        {
          field: ["ts"],
          reverse: true
        },
        {
          field: ["ref"]
        }
      ]
    })
    

    This creates an Index named getMindfulByUserReverse, built from our user_things Collection. The terms object is a list of fields to search by; in our case, just the user field on the data object. We then provide the values to return — the Ref and the Timestamp — and use the reverse property to reverse-order our results by that field.

    We’ll create a new UDF to use this Index.

    Query(
      Lambda(
        [],
        If( // Check if there is at least 1 in the index
          GT(
            Count(
              Select(
                "data",
                Paginate(Match(Index("getMindfulByUserReverse"), Identity()))
              )
            ),
            0
          ),
          Let( // if more than 0
            {
              match: Paginate(
                Match(Index("getMindfulByUserReverse"), Identity()) // Search the index by our User
              ),
              latestObj: Take(1, Var("match")), // Grab the first item from our match
              latestRef: Select(
                ["data"],
                Get(Select(["data", 0, 1], Var("latestObj"))) // Get the data object from the item
              ),
              latestTime: Select(["data", 0, 0], Var("latestObj")), // Get the time
              merged: Merge( // merge those items into one object to return
                { latestTime: Var("latestTime") },
                { latestMindful: Var("latestRef") }
              )
            },
            Var("merged")
          ),
          Let({ error: { err: "No data" } }, Var("error")) // if there aren't any, return an error.
        )
      )
    )
    

    This time our Lambda() function doesn’t need any arguments, since the User is derived from the Token used to make the call.

    First, we’ll check to see if there’s at least one item in our Index. If there is, we’ll grab the first item’s data and timestamp and return them as a single merged object.
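    The nested Select() paths make more sense once you see the shape Paginate(Match(...)) returns for this two-value index: an array of [ts, ref] tuples under data, newest first. A plain-JavaScript illustration with placeholder values:

    ```javascript
    // Illustration only: what a page from our two-value index looks like.
    // Each entry is [ts, ref] because the index returns ts first, then ref.
    const page = {
      data: [
        [1630000000000000, "Ref(Collection('user_things'), '301')"], // newest
        [1620000000000000, "Ref(Collection('user_things'), '298')"],
      ],
    };

    // Take(1, ...) keeps only the newest tuple, so in the UDF:
    // Select(["data", 0, 0]) is the latest timestamp...
    const latestTime = page.data[0][0];
    // ...and Select(["data", 0, 1]) is the Ref we Get() the document with.
    const latestRef = page.data[0][1];
    ```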

    After we get the latest item from Fauna in our JavaScript, we’ll format it into the structure our storeCurrent() and render() methods expect and return that object.
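    A minimal sketch of that formatting step, assuming the UDF’s merged return shape of { latestTime, latestMindful } shown above, where latestMindful is the document’s data object holding a mindful property (the later UDF selects "mindful" from it). The exact output field names are assumptions:

    ```javascript
    // Hypothetical formatter: reshape the UDF's { latestTime, latestMindful }
    // result into the flat object our store/render methods might expect.
    // Field names beyond latestTime and latestMindful are assumptions.
    function formatLatest(result) {
      return {
        latestTime: result.latestTime,
        // Spread the stored mindful data (e.g. its title) onto the object.
        ...result.latestMindful.mindful,
      };
    }
    ```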

    Now we have an application that creates, stores, and fetches data for a daily message to contemplate. A user can use it on their phone, tablet, or computer and have everything stay in sync. We could turn this into a PWA or even a native app with a system like Ionic.

    We’re still missing one feature: viewing a certain number of past items. Since we’ve stored everything in our database, we can retrieve items in whatever way we need.

    Pull the latest X Mindful Missions to get a picture of what you’ve thought about

    We’ll create a new JavaScript method paired with a new UDF to tackle this.

    getSomeFromFauna will take an integer count and ask Fauna for that many items.

    Our UDF will be very similar to the getLatestFromFauna UDF. Instead of returning only the first item, we’ll Take() as many items from our matched array as the integer passed into the UDF specifies. We’ll also begin with the same conditional, in case a user doesn’t have any items stored yet.

    Query(
      Lambda(
        ["count"], // Number of items to return
        If( // Check if there are any objects
          GT( 
            Count(
              Select(
                "data",
                Paginate(Match(Index("getMindfulByUserReverse"), Identity()))
              )
            ),
            0
          ),
          Let(
            {
              match: Paginate(
                Match(Index("getMindfulByUserReverse"), Identity()) // Search the Index by our User
              ),
              latestObjs: Select("data", Take(Var("count"), Var("match"))), // Get the data that is returned
              mergedObjs: Map( // Loop over the objects
                Var("latestObjs"),
                Lambda(
                  "latestArray",
                  Let( // Build the data like we did in the LatestMindful function
                    {
                      ref: Select(["data"], Get(Select([1], Var("latestArray")))),
                      latestTime: Select(0, Var("latestArray")),
                      merged: Merge(
                        { latestTime: Var("latestTime") },
                        Select("mindful", Var("ref"))
                      )
                    },
                    Var("merged") // Return this to our new array
                  )
                )
              )
            },
            Var("mergedObjs") // return the full array
          ),
          { latestMindful: [{ title: "No additional data" }] } // if there are no items, send back a message to display
        )
      )
    )
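
    On the JavaScript side, the matching call just passes the count through to the UDF. As before, this is a dependency-free sketch: client and q stand in for the faunadb driver’s Client and query builder, and the UDF name getSomeMindfuls is an assumption.

    ```javascript
    // Hedged sketch: "client" and "q" come from the faunadb driver, as in
    // getLatestFromFauna. The UDF name "getSomeMindfuls" is an assumption.
    function getSomeFromFauna(client, q, count) {
      // The count becomes the UDF's single "count" argument.
      return client.query(q.Call(q.Function("getSomeMindfuls"), count));
    }
    ```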
    

    In this demo, we created a full-fledged app with serverless data. Because the data is served from a CDN, it can be as close to a user as possible. We used FaunaDB features such as UDFs and Indexes to optimize our database queries for speed and ease of use, and we kept requests to a bare minimum by querying the database only when necessary.

    Where To Go With Serverless Data

    The JAMstack isn’t just for sites. It can be used for robust applications as well. Whether that’s a game, a CRUD application, or just a way to be mindful of your surroundings, you can do a lot without sacrificing customization and without spinning up your own non-distributed database system.

    With performance on the mind of everyone creating on the JAMstack, whether for cost or for user experience, finding a good place to store and retrieve your data is a high priority. Find a spot that meets your needs, the needs of your users, and the ideals of the JAMstack.
