    Choosing A New Serverless Database Technology At An Agency (Case Study) — Smashing Magazine

    03/30/2021

    About The Author

    Michael is a full stack engineer who is passionate about solving real business problems with code. He is a Lead Software Engineer at The Knot Worldwide as well …

    Choosing to use a new technology can often bring much desired productivity, security, and efficiency to a project. It is also fraught with risk and uncertainty. How and when to adopt a new technology for client projects is at the heart of leading a great agency. In this article, Michael Rispoli explains how he evaluated the decision of whether or not to adopt a serverless database for client projects.

    Adopting a new technology is one of the hardest decisions for a technologist in a leadership role. This is often a large and uncomfortable area of risk, whether you are building software for another organization or within your own.

    Over the last twelve years as a software engineer, I’ve found myself in the position of having to evaluate a new technology at increasing frequency. This may be the next frontend framework, a new language, or even entirely new architectures like serverless.

    The experimentation phase is often fun and exciting. It is where software engineers are most at home, embracing the novelty and euphoria of “aha” moments while grokking new concepts. As engineers, we like to think and tinker, but with enough experience, every engineer learns that even the most incredible technology has its blemishes. You just haven’t found them yet.

    Now, as the co-founder of a creative agency, my team and I are often in a unique position to use new technologies. We see many greenfield projects, which become the perfect opportunity to introduce something new. These projects also see a level of technical isolation from the larger organization and are often less burdened by prior decisions.

    That being said, a good agency lead is entrusted to care for someone else’s big idea and deliver it to the world. We have to treat it with even more care than we would our own projects. Whenever I’m about to make the final call on a new technology, I often ponder this piece of wisdom from Stack Overflow co-founder Joel Spolsky:

    “You have to sweat and bleed with the thing for a year or two before you really know it’s good enough or realize that no matter how hard you try you can’t…”

    This is the fear, this is the place that no tech lead wants to find themselves in. Choosing a new technology for a real-world project is hard enough, but as an agency, you have to make these decisions with someone else’s project, someone else’s dream, someone else’s money. At an agency, the last thing you want is to find one of those blemishes near the deadline for a project. Tight timelines and budgets make it nearly impossible to reverse course after a certain threshold is crossed, so finding out a technology can’t do something critical or is unreliable too late into a project can be catastrophic.

    Throughout my career as a software engineer, I’ve worked at SaaS companies and creative agencies. When it comes to adopting a new technology for a project these two environments have very different criteria. There is overlap in criteria, but by and large, the agency environment has to work with rigid budgets and rigorous time constraints. While we want the products we build to age well over time, it’s often more difficult to make investments in something less proven or to adopt technology with steeper learning curves and rough edges.

    That being said, agencies also have some unique constraints that a single organization may not have. We have to bias for efficiency and stability. The billable hour is often the final unit of measurement when a project is complete. I’ve been at SaaS companies where spending a day or two on setup or a build pipeline is no big deal.

    At an agency, this type of time cost puts strain on relationships as finance teams see narrowing profit margins for little visible results. We also have to consider the long-term maintenance of a project, and conversely what happens if a project needs to be handed back off to the client. We therefore must bias for efficiency, learning curve, and stability in the technology we choose.

    When evaluating a new piece of technology I look at three overarching areas:

    1. The Technology
    2. The Developer Experience
    3. The Business

    Each of these areas has a set of criteria I like met before I start really diving into the code and experimenting. In this article, we’ll take a look at these criteria and use the example of considering a new database for a project and review it at a high level under each lens. Taking a tangible decision like this will help demonstrate how we can apply this framework in the real world.

    The Technology

    The very first thing to take a look at when evaluating a new technology is if that solution can solve the problems it claims to solve. Before diving into how a technology can help our process and business operations, it’s important to first establish that it is meeting our functional requirements. This is also where I like to take a look at what existing solutions we are using and how this new one stacks up against them.

    I’ll ask myself questions like:

    1. Does it at a minimum solve the problem my existing solution does?
    2. In what ways is this solution better?
    3. In what ways is it worse?
    4. For areas that it is worse, what will it take to overcome those shortcomings?
    5. Will it take the place of multiple tools?
    6. How stable is the technology?

    Our Why?

    At this point, I also want to review why we are seeking another solution. A simple answer is that we are encountering a problem existing solutions don’t solve. However, this is rarely the case. We have solved many software problems over the years with the technology we already have. What typically happens is that we get turned onto a new technology that makes something we are currently doing easier, more stable, faster, or cheaper.

    Let’s take React as an example. Why did we decide to adopt React when jQuery or vanilla JavaScript was doing the job? In this case, using the framework highlighted how much better a way this was to handle stateful frontends. It became faster for us to build things like filtering and sorting features by working with data structures instead of direct DOM manipulation. This saved time and increased the stability of our solutions.
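    The difference is easy to see in a minimal sketch. The product data, field names, and `deriveView` helper below are hypothetical, but they illustrate the shift: instead of hand-wiring DOM updates, filtering and sorting become pure transformations over data, which a framework like React would then render.

```javascript
// A hypothetical product list. With a state-driven approach, filtering and
// sorting are pure transformations over data rather than DOM surgery.
const products = [
  { name: "Desk", price: 299, inStock: true },
  { name: "Lamp", price: 49, inStock: false },
  { name: "Chair", price: 149, inStock: true },
];

// Derive the view from state; a framework like React re-renders
// automatically whenever `products` or the filter options change.
function deriveView(items, { inStockOnly, sortBy }) {
  const view = inStockOnly ? items.filter((p) => p.inStock) : [...items];
  return view.sort((a, b) => (a[sortBy] < b[sortBy] ? -1 : 1));
}

console.log(deriveView(products, { inStockOnly: true, sortBy: "price" }).map((p) => p.name));
// → [ 'Chair', 'Desk' ]
```

    Because the view is just a function of the data, the same logic is trivial to unit test, which is part of where the stability gains came from.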

    TypeScript is another example: we adopted it because we found it increased the stability and maintainability of our code. When adopting new technologies, there often isn’t a clear problem we are looking to solve; rather, we are looking to stay current, and in the process we discover solutions that are more efficient and stable than what we are currently using.

    In the case of a database, we were specifically considering moving to a serverless option. We had seen a lot of success with serverless applications and deployments reducing our overhead as an organization. One area where we felt this was lacking was our data layer. We saw services like Amazon Aurora, Fauna, Azure Cosmos DB, and Firebase that were applying serverless principles to databases and wanted to see if it was time to take the leap ourselves. In this case, we were looking to lower our operational overhead and increase our development speed and efficiency.

    It’s important at this level to understand your why before you start diving into new offerings. This may be because you are solving a novel problem, but far more often you are looking to improve your ability to solve a type of problem you are already solving. In that case, you need to take inventory of where you have been to figure out what would provide a meaningful improvement to your workflow. Building upon our example of looking at serverless databases, we’ll need to take a look at how we are currently solving problems and where those solutions fall short.

    Where we have been…

    As an agency, we have previously used a wide range of databases including but not limited to MySQL, PostgreSQL, MongoDB, DynamoDB, BigQuery, and Firebase Cloud Storage. The vast majority of our work centered around three core databases though: PostgreSQL, MongoDB, and Firebase Realtime Database. Each one of these does, in fact, have semi-serverless offerings, but some key features of newer offerings had us re-evaluating our previous assumptions. Let’s take a look at our historical experience with each of these first and why we are left considering alternatives in the first place.

    We typically chose PostgreSQL for larger, long-term projects, as this is the battle-tested gold standard for almost everything. It supports classic transactions, normalized data, and is ACID compliant. There are a wealth of tools and ORMs available in almost every language and it can even be used as an ad-hoc NoSQL database with its JSON column support. It integrates well with many existing frameworks, libraries and programming languages making it a true go-anywhere workhorse. It is also open-source and therefore doesn’t get us locked into any one vendor. As they say, nobody ever got fired for choosing Postgres.

    That being said, we gradually found ourselves using PostgreSQL less and less as we became more of a Node-oriented shop. We found the ORMs for Node to be lackluster, often requiring more custom queries (although this has become less of a problem now), and NoSQL felt like a more natural fit when working in a JavaScript or TypeScript runtime. Still, we often had projects, like e-commerce workflows, that could be built quite quickly with classic relational modeling. However, dealing with the local setup of the database, unifying the testing flow across teams, and managing local migrations were things we didn’t love and were happy to leave behind as NoSQL, cloud-based databases became more popular.

    MongoDB increasingly became our go-to database as we adopted Node.js as our preferred back end. Working with MongoDB Atlas made it easy to spin up development and testing databases that our team could share. For a long time, MongoDB was not ACID compliant, didn’t support transactions, and discouraged too many inner-join-like operations, so for e-commerce applications we still used Postgres most often. That said, there is a wealth of libraries to go with it, and Mongo’s query language and first-class JSON support gave us a speed and efficiency we had not experienced with relational databases. MongoDB has since added support for ACID transactions, but for a long time this was the chief reason we would opt for Postgres instead.

    MongoDB also introduced us to a new level of flexibility. In the middle of an agency project, requirements are bound to change. No matter how hard you defend against it, there is always a last-minute data requirement. With NoSQL databases in general, the flexibility of the data structure made those types of changes less harsh. We didn’t end up with a folder full of migration files that added, removed, and re-added columns before a project even saw daylight.
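    To make that concrete, here is a minimal, hypothetical sketch (the `orders` collection and `giftMessage` field are invented for illustration). In a schemaless store, a late-added field simply appears on new documents, and the application supplies a default instead of running a migration:

```javascript
// Hypothetical orders collection: a `giftMessage` field added mid-project
// simply appears on new documents — older documents are left untouched,
// and no migration files are needed.
const orders = [
  { id: 1, total: 40 },                               // created before the change
  { id: 2, total: 25, giftMessage: "Happy birthday" } // created after
];

// The application handles the missing field with a default
// instead of a schema migration.
function giftMessageFor(order) {
  return order.giftMessage ?? "";
}
```

    The tradeoff, of course, is that the schema now lives implicitly in application code rather than being enforced by the database.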

    As a service, Mongo Atlas was also pretty close to what we desired in a database cloud service. I like to think of Atlas as a semi-serverless offering since you still have some operational overhead in managing it. You have to provision a certain size database and select an amount of memory upfront. These things will not scale for you automatically so you will need to monitor it for when it is time to provide more space or memory. In a truly serverless database, this would all happen automatically and on-demand.

    We also utilized Firebase Realtime Database for a few projects. This was indeed a serverless offering where the database scales up and down on-demand, and with pay-as-you-go pricing, it made sense for applications where the scale was not known upfront and the budget was limited. We used this instead of MongoDB for short-lived projects that had simple data requirements.

    One thing we did not enjoy about Firebase was that it felt further from the typical relational model built around normalized data that we were used to. Keeping the data structures flat meant we often had more duplication, which could turn a bit ugly as a project grows: you end up updating the same data in multiple places, or joining together different references through multiple queries that become hard to reason about in the code. While we liked Firebase, we never really fell in love with the query language, and we sometimes found the documentation lackluster.

    In general, both MongoDB and Firebase had a similar focus on denormalized data, and without access to efficient transactions, many workflows that were easy to model in a relational database led to more complex code at the application layer with their NoSQL counterparts. If we could get the flexibility and ease of these NoSQL offerings with the robustness and relational modeling of a traditional SQL database, we would really have a great match. We felt MongoDB had the better API and capabilities, but Firebase had the truly serverless operational model.
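    The cost of denormalization shows up most clearly on updates. In this hypothetical in-memory sketch (the `posts`/`authors` shapes and rename helpers are invented for illustration, not any particular database’s API), renaming an author means touching every denormalized copy, whereas the normalized version updates one record:

```javascript
// Denormalized: the author's name is copied onto every post,
// so a rename must visit each document.
const posts = [
  { title: "Intro", author: { id: "a1", name: "Ann" } },
  { title: "Deep dive", author: { id: "a1", name: "Ann" } },
];

function renameAuthorDenormalized(allPosts, authorId, newName) {
  return allPosts.map((p) =>
    p.author.id === authorId
      ? { ...p, author: { ...p.author, name: newName } }
      : p
  );
}

// Normalized: posts hold only a reference, so one update
// covers every post that points at the author.
const authors = { a1: { name: "Ann" } };
const normalizedPosts = [
  { title: "Intro", authorId: "a1" },
  { title: "Deep dive", authorId: "a1" },
];

function renameAuthorNormalized(allAuthors, authorId, newName) {
  return { ...allAuthors, [authorId]: { ...allAuthors[authorId], name: newName } };
}
```

    Without transactions, the denormalized multi-document update is also where partial failures can leave data inconsistent, which is exactly the application-layer complexity described above.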

    [Figure: A Venn diagram of technologies A, B, and C, overlapping at an ideal new solution made of the features you like.]
    When looking at different technologies, our ideal solution’s feature set is going to live where these technologies overlap. This gets us all of what we love, plus additional features that were previously tradeoffs.

    Our Ideal

    At this point, we can start looking at what new options we will consider. We’ve clearly defined our previous solutions and we’ve identified the things that are important for us to have at a minimum in our new solution. We not only have a baseline or minimum set of requirements, but we also have a set of problems that we’d like the new solution to alleviate for us. Here are the technical requirements we have:

    1. Serverless operationally with on-demand scale
    2. Flexible modeling (schemaless)
    3. No reliance on migrations or ORMs
    4. ACID compliant transactions
    5. Supports relationships and normalized data
    6. Works with both serverless and traditional backends

    So now that we have a list of must-haves we can actually evaluate some options. It may not be important that the new solution nails every target here. It may just be that it hits the right combination of features where existing solutions are not overlapping. For instance, if you wanted schemaless flexibility, you had to give up ACID transactions. (This was the case for a long time with databases.)

    An example from another domain: if you want TypeScript validation in your template rendering, you need to be using TSX and React. If you go with options like Svelte or Vue, you can have this — partially but not completely — through their template rendering. So a solution that gave you the tiny footprint and speed of Svelte with the template-level type checking of React and TypeScript could be enough for adoption even if it were missing another feature. The balance of wants and needs will change from project to project. It is up to you to figure out where the value lies and decide how to tick the most important points in your analysis.
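    One lightweight way to make this balancing act explicit is a weighted scorecard. The criteria, weights, and scores below are purely illustrative (not our actual evaluation numbers); the point is that weighting the criteria by importance lets a candidate win overall even while losing on some individual features:

```javascript
// Hypothetical evaluation criteria, weighted by how much they matter
// to this particular team and project.
const criteria = [
  { name: "serverless scaling", weight: 3 },
  { name: "ACID transactions", weight: 3 },
  { name: "schemaless modeling", weight: 2 },
  { name: "learning curve", weight: 1 },
];

// Illustrative 0–5 scores per criterion, in the same order as `criteria`.
const candidates = {
  optionA: [5, 2, 5, 4],
  optionB: [2, 5, 1, 3],
};

// Sum each score multiplied by its criterion's weight.
function totalScore(scores) {
  return scores.reduce((sum, s, i) => sum + s * criteria[i].weight, 0);
}
```

    A sheet like this won’t make the decision for you, but it forces the team to agree on priorities before the experimentation phase starts.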

    We can now take a look at a solution and see how it evaluates against our desired solution. Fauna is a serverless database solution that boasts on-demand scale with global distribution. It is a schemaless database that provides ACID-compliant transactions and supports relational queries and normalized data as a feature. Fauna can be used in both serverless applications and more traditional backends, and it provides libraries to work with the most popular languages. Fauna additionally provides workflows for authentication as well as easy and efficient multi-tenancy. These are both solid additional features to note because they could be the swaying factors when two technologies are nose to nose in our evaluation.

    Now, after looking at all of these strengths, we have to evaluate the weaknesses. One is that Fauna is not open source. This means there are risks of vendor lock-in, or of business and pricing changes that are out of your control. Open source can be nice because you can often take the technology to another vendor if you please, or potentially contribute back to the project.

    In the agency world, vendor lock-in is something we have to watch closely, not so much because of the price, but because the viability of the underlying business is important. Having to change databases on a project that is mid-development, or on one that is a few years old, is disastrous for an agency. Often a client will have to foot the bill, which is not a pleasant conversation to have.

    One other weakness we were concerned with is the focus on JAMstack. While we love JAMstack, we find ourselves building a wide variety of traditional web applications more often. We want to be sure that Fauna continues to support those use cases. We had a bad experience in the past with a hosting provider that went all-in on JAMstack and we ended up having to migrate a rather large swath of sites from the service, so we want to feel confident that all use cases will continue to see solid support. Right now, this seems to be the case, and the serverless workflows provided by Fauna actually can complement a more traditional application quite nicely.

    At this point, we’ve done our functional research and the only way to know if this solution is viable is to get down and write some code. In an agency environment, we can’t just take weeks out of the schedule for people to evaluate multiple solutions. This is the nature of working in an agency vs. a SaaS environment. In the latter, you might build a few prototypes to try to get to the right solution. In an agency, you will get a few days to experiment, or maybe the opportunity to do a side project but by and large we really have to narrow this down to one or two technologies at this stage and then put the fingers to the keyboard.

    The Developer Experience

    Judging the experience side of a new technology is perhaps the most difficult of the three areas since it is by nature subjective. It will also have variability from team to team. For example, if you asked a Ruby programmer, a Python programmer, and a Rust programmer about their opinions on different language features, you will get quite an array of responses. So, before you begin to judge an experience, you must first decide what characteristics are most important to your team overall.

    [Comic: A Python programmer sees JavaScript for the first time and asks why there are so many semicolons.]

    For agencies I think there are two major bottlenecks that come up with regard to developer experience:

    1. Setup time and configuration
    2. Learnability

    Both of these affect the long-term viability of a new technology in different ways. Keeping transient teams of developers in sync at an agency can be a headache, and tools with lots of upfront setup costs and configuration are notoriously difficult for agencies to work with. The other factor is learnability: how easily developers can grow with the new technology. We’ll go into both in more detail, and into why they are my baseline when starting to evaluate developer experience.

    Setup Time And Configuration

    Agencies tend to have little patience and time for configuration. For me, I love sharp tools, with ergonomic designs, that allow me to get to work on the business problem at hand quickly. A number of years ago I worked for a SaaS company that had a complex local setup that involved many configurations and often failed at random points in the setup process. Once you were set up, the conventional wisdom was not to touch anything, and hope that you weren’t at the company long enough to have to set it up again on another machine. I’ve met developers that greatly enjoyed configuring each little piece of their emacs setup and thought nothing of losing a few hours to a broken local environment.

    In general, I have found agency engineers have a disdain for these types of things in their day-to-day work. While at home they may tinker with these types of tools, but when on a deadline there’s nothing like tools that just work. At agencies, we typically would prefer to learn a few new things that work well, consistently, rather than to be able to configure each piece of tech to each individual’s personal taste.

    [Figure: A graph of enjoyment against configuration time and effort, dropping off at the point where test repos are left stranded.]
    There is an inflection point in configuration at which our enjoyment of a framework drops off precipitously. Technologies that hit this point are rarely adopted in agencies without an extremely powerful feature set.

    One thing that is good about working with a cloud platform that is not open source is they own the setup and configuration entirely. While a downside of this is vendor lock-in, the upside is that these types of tools often do the thing they are set up to do well. There is no tinkering with environments, no local setups, and no deployment pipelines. We also have fewer decisions to make.

    This is inherently the appeal of serverless. Serverless in general has a greater reliance on proprietary services and tools. We trade the flexibility of hosting and source code so that we can gain greater stability and focus on the problems of the business domain we are trying to solve. I’ll also note that when I’m evaluating a technology and I get the feeling that migrating off of a platform might be needed, this is often a bad sign at the outset.

    In the case of databases, the set-it-and-forget-it setup is ideal when working with clients where the database needs can be ambiguous. We’ve had clients who were unsure how popular a program or application would be. We’ve had clients that we technically were not contracted to support in this way but nonetheless called us in a panic when they needed us to scale their database or application.

    In the past, we always had to factor in things like redundancy, data replication, and sharding when we crafted our SOWs. Trying to cover each scenario, while also being prepared to move a full book of business around in the event a database wasn’t scaling, is an impossible situation to prepare for. In the end, a serverless database makes these things easier.

    You never lose data, and you don’t have to worry about replicating it across a network or provisioning a larger database and machine to run it on – it all just works. We focus only on the business problem at hand; the technical architecture and scale are always managed. For our development team, this is a huge win: fewer fire drills, less monitoring, and less context switching.

    Learnability

    There is a classic user-experience measure that I think applies to developer experience as well: learnability. When designing for a certain user experience, we don’t just ask whether something is apparent or easy on the first try; technology usually has more complexity than that. What matters is how easily a new user can learn and master the system.

    When it comes to technical tools, especially powerful ones, it would be a lot to ask for there to be zero learning curve. Usually what we look for is for there to be great documentation for the most common use cases and for that knowledge to be easily and quickly built upon when in a project. Losing a little time to learning on the first project with a technology is okay. After that, we should see efficiency improve with each successive project.

    What I look for specifically here is how we can leverage knowledge and patterns we already know to help shorten the learning curve. For instance, with serverless databases, there is going to be virtually zero learning curve for getting them set up in the cloud and deployed. When it comes to using the database one of the things I like is when we can still leverage all the years of mastering relational databases and apply those learnings to our new setup. In this case, we are learning how to use a new tool but it’s not forcing us to rethink our data modeling from the ground up.

    [Comic: A presenter tells a room of four developers that the secret to their product is unlearning everything they already know.]
    The solution that is really amazing, if only you could forget everything you’ve ever learned in the past.

    As an example of this, when using Firebase, MongoDB, and DynamoDB, we found that they encouraged denormalized data rather than joining different documents. This created a lot of cognitive friction when modeling our data, as we needed to think in terms of access patterns rather than business entities. Fauna, on the other hand, allowed us to leverage our years of relational knowledge, as well as our preference for normalized data, when modeling.

    The part we had to get used to was using indexes and a new query language to bring those pieces together. In general, I’ve found that preserving concepts that are a part of larger software design paradigms makes it easier on the development team in terms of learnability and adoption.

    How do we know that a team is adopting and loving a new technology? I think the best sign is when the team starts asking whether other tools integrate with it. When a new technology reaches a level of desirability and enjoyment where the team is searching for ways to incorporate it into more projects, that is a good sign you have a winner.

    The Business

    In this section, we have to look at how a new technology meets our business needs. These include questions like:

    • How easily can it be priced and integrated into our support plans?
    • Can we transition it to clients easily?
    • Can clients be onboarded to this tool if need be?
    • How much time does this tool actually save if any?

    The rise of serverless as a paradigm fits agencies well. When we talk about databases and DevOps, the need for specialists in these areas at agencies is limited. Often we are handing off a project when we are done with it or supporting it in a limited capacity long term. We tend to bias toward full-stack engineers as these needs outnumber DevOps needs by a large margin. If we hired a DevOps engineer they would likely be spending a few hours deploying a project and many more hours hanging out waiting for a fire.

    In this regard, we always have some DevOps contractors at the ready, but we do not staff these positions full time. This means we cannot rely on a DevOps engineer being ready to jump on an unexpected issue. For us, we know we can get better rates on hosting by going to AWS directly, but we also know that by using Heroku we can rely on our existing staff to debug most issues. Unless we have a client we need to support long term with specific backend needs, we like to default to managed platforms as a service.

    Databases are no exception. We love leaning on services like Mongo Atlas or Heroku Postgres to make this process as easy as possible. As we started to see more and more of our stack head into serverless tools like Vercel, Netlify, or AWS Lambda – our database needs had to evolve with that. Serverless databases like Firebase, DynamoDB, and Fauna are great because they integrate well with serverless apps but also free our business completely from provisioning and scaling.

    These solutions also work well for more traditional applications, where we don’t have a serverless application but we can still leverage serverless efficiencies at the database level. As a business, it is more productive for us to learn a single database that can apply to both worlds than to context switch. This is similar to our decision to adopt Node and isomorphic JavaScript (and TypeScript).

    One of the downsides we have found with serverless has been coming up with pricing for clients whose services we manage. In a more traditional architecture, flat-rate tiers make it very easy to translate costs into a rate for clients, with predictable circumstances for increases and overages. When it comes to serverless, this can be ambiguous. Finance teams don’t typically like hearing things like “we charge a tenth of a penny for every read beyond one million,” and so on.

    This is hard to translate into a fixed number, even for engineers, as we are often building applications where we are not certain what the usage will be. We often have to create tiers ourselves, but the many variables that go into the cost calculation of a lambda can be hard to wrap your head around. Ultimately, these pay-as-you-go pricing models are great for a SaaS product, but for agencies, the accountants like more concrete and predictable numbers.
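    One approach that has worked for us is turning the provider’s rate card into an internal flat tier with headroom. The sketch below is hypothetical — the rates and usage numbers are invented for illustration, not any provider’s actual pricing — but it shows the basic arithmetic of converting pay-as-you-go line items into one predictable monthly number:

```javascript
// Hypothetical pay-as-you-go rate card (all numbers invented).
const rates = {
  readsPerMillion: 0.5,   // $ per 1M read ops
  writesPerMillion: 2.5,  // $ per 1M write ops
  storagePerGb: 0.25,     // $ per GB-month
};

// Estimate the raw monthly cost from projected usage.
function estimateMonthlyCost(usage) {
  return (
    (usage.reads / 1e6) * rates.readsPerMillion +
    (usage.writes / 1e6) * rates.writesPerMillion +
    usage.storageGb * rates.storagePerGb
  );
}

// Quote a flat tier: the estimate plus a safety buffer, rounded up
// to a whole dollar amount the client can budget for.
function flatTier(usage, bufferPct = 50) {
  return Math.ceil(estimateMonthlyCost(usage) * (1 + bufferPct / 100));
}

console.log(flatTier({ reads: 10e6, writes: 2e6, storageGb: 20 }));
// → 23
```

    The buffer absorbs month-to-month variance; usage that consistently exceeds it becomes the trigger for moving a client to the next tier.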

    [Comic: An accountant at a desk thinks, “I asked for a number, not a formula.”]
    When an accountant tries to figure out how much a serverless infrastructure will cost, they typically want a dollar amount, not an esoteric formula.

    When it came to Fauna, this was definitely more ambiguous to figure out than, say, a standard MySQL database with flat-rate hosting for a set amount of space. The upside was that Fauna provides a nice calculator that we were able to use to put together our own pricing schemes.

    [Screenshot: Fauna’s pricing calculator, found on their site.]
    Fauna’s pricing calculator, a useful tool for helping craft pricing structures for clients transparently.

    Another difficult aspect of serverless can be that many of these providers do not allow for easy breakdown of each application being hosted. For instance, the Heroku platform makes this quite easy by creating new pipelines and teams. We can even enter a client’s credit card for them in case they don’t want to use our hosting plans. This can all be done within the same dashboard as well so we didn’t need to create multiple logins.

    When it came to other serverless tools, this was much more difficult. Among the serverless databases we evaluated, Firebase supports splitting payments by project. With Fauna or DynamoDB this is not possible, so we have to do some work to monitor usage in their dashboards, and if a client wants to leave our service, we have to transfer the database over to their own account.

    Ultimately, serverless tools provide great business opportunities in terms of cost savings, management, and process efficiency. However, often they do prove challenging for agencies when it comes to pricing and account management. This is one area where we have had to leverage cost calculators to create our own predictable pricing tiers or set clients up with their own accounts so they can make the payments directly.

    Conclusion

    It can be a difficult task to adopt a new technology as an agency. While we are in a unique position to work with new, greenfield projects that have opportunities for new technologies, we also have to consider the long-term investment of these. How will they perform? Will our people be productive and enjoy using them? Can we incorporate them into our business offering?

    You need to have a firm grasp of where you have been before you figure out where you want to go technologically. When evaluating a new tool or platform it’s important to think of what you have tried in the past and figure out what is most important to you and your team. We took a look at the concept of a serverless database and passed it through our three lenses – the technology, the experience, and the business. We were left with some pros and cons and had to strike the right balance.

    After we evaluated serverless databases, we decided to adopt Fauna over the alternatives. We felt the technology was robust and ticked all of the boxes in our technology filter. When it came to the experience, virtually zero configuration and being able to leverage our existing knowledge of relational data modeling made this a winner with the development team. On the business side, serverless provides clear wins in efficiency and productivity; however, pricing and account management still present some difficulties. We decided the benefits in the other areas outweighed them.

    Overall, we highly recommend giving Fauna a shot on one of your next projects. It has become one of our favorite tools and our go-to database of choice for smaller serverless projects and even more traditional large backend applications. The community is very helpful, the learning curve is gentle, and we believe you’ll find levels of productivity you hadn’t reached with your existing databases.

    When we first use a new technology on a project, we start with something either internal or on the smaller side. We try to mitigate the risk by wading into the water rather than leaping into the deep end by trying it on a large and complex project. As the team builds understanding of the technology, we start using it for larger projects but only after we feel comfortable that it has handled similar use cases well for us in the past.

    In general, it can take up to a year for a technology to become a ubiquitous part of most projects, so it is important to be patient. Agencies have a lot of flexibility but are also required to ensure stability in the products they produce; we don’t get a second chance. Always be experimenting and pushing your agency to adopt new technologies, but do so carefully and you will reap the benefits.


    Smashing Editorial
    (vf, il)



    Modeling A GraphQL API For Your Blog Using Webiny Serverless CMS — Smashing Magazine

    03/09/2021

    About The Author

    Nwani Victory works remotely as a Fullstack developer from Lagos, Nigeria. After office hours, he doubles as a Cloud Engineer seeking ways to make Cloud …

    In the world of serverless applications, Webiny is becoming a popular way to adopt the serverless approach of building applications by providing handy tools that developers can build their apps upon. In this article, we will look into what Webiny is and try out the headless CMS as a data source for a Gatsby blog application.

    In times past, developers reduced the challenges of managing content-dependent platforms through the use of Content Management Systems (CMS), which allowed web content to be created and displayed using existing design templates provided by the CMS service.

    But with the arrival of Single Page Applications (SPAs), this approach to managing content became unfavorable, as developers were locked in to the provided design layouts. This is the point where Headless CMS services have been largely embraced, as developers have sought more freedom to serve content across various clients such as mobile, web, desktop, and even wearable devices.

    A headless CMS stores data in a backend database; however, unlike a traditional CMS, where content is displayed through a defined template, content is delivered via an API. This gives developers the flexibility to consume content across various clients or frontend frameworks.

    One example of such a headless CMS is Webiny. It’s a serverless headless CMS that provides a personalized admin application to create content, and a robust GraphQL API to consume whatever content was created through the admin application. Further down this article, we will explore Webiny, use the admin app to model content through the Headless CMS app, then consume the content via the GraphQL API in a Gatsby blog application.

    If this is your first time hearing of Webiny, it’s an open-source framework for building serverless applications which provides users with tools and ready-made applications. It has a growing developer community on Slack, and ultimately aims to make the development of serverless applications easy and straightforward.

    To make this article easy to follow, it has been broken down into two major segments. You can either skip to the part that interests you most, or follow them in the order as they appear below:

    Note: To follow along, you’ll need to have an AWS account (if not, please create one) and Yarn or npm installed on your local machine. A good understanding of React.js is beneficial, as the demo application is built using Gatsby.

    Creating And Deploying A Webiny Project

    To get started, we’re going to create a new Webiny project, deploy it and use the Headless CMS through the generated admin app to begin modeling content within the GraphQL API.

    Running the command below from a terminal will generate a new Webiny project based on your answers to the installation prompts:

    npx create-webiny-project@beta webiny-blog-backend --tag beta
    

    The command above would run all steps needed for bootstrapping a Webiny project. A Webiny project consists of three smaller applications: a GraphQL API, an admin app, and also a website — all of which are contained in the root generated Webiny project folder similar to the one in the image below.

    Generated Webiny project directory structure.
    Generated Webiny project directory structure. (Large preview)

    Next, we need to start the deployment of the three components within the Webiny project to AWS so we can access the GraphQL API. The Cloud Infrastructure section of the Webiny documentation gives a detailed explanation of the entire infrastructure deployed to AWS.

    Run the command below from your terminal to begin this deployment, which will take a few minutes:

    yarn webiny deploy

    After a successful deployment of all three apps, the URLs of the admin app, the GraphQL API endpoint, and the website will be printed in the terminal. You can save them in an editor for later use.

    Note: The command above deploys the three generated applications collectively. Please visit this part of the Webiny documentation for instructions on how to deploy the applications individually.

    Next, we will be setting up the Headless CMS using the admin application generated for managing your Webiny project.

    Webiny Admin App

    As part of the first-time installation process, when you access your admin app you will be prompted to create a default user with your details and a password to secure the app, after which you proceed through the installation prompts for the Headless CMS, Page Builder, and Form Builder.

    Welcome page of the Admin App showing other Webiny Apps.
    Welcome page of the Admin App showing other Webiny Apps. (Large preview)

    From the Admin welcome page shown above, navigate to the Content Models page by clicking on the New Content Model button within the Headless CMS card. Being a new project, the Content Models list will be empty; next, we create our first Content Model.

    For our use-case, each content model would represent a blog post, this means each time we want to create a blog post we would create a content model and the data would be saved into the database and added to GraphQL API.

    Clicking the lemon floating action button would display the modal with the fields for creating a new Content Model as shown in the image below.

    Displayed create content modal with the needed fields for creating a new content.
    Displayed create content modal with the needed fields for creating a new content. (Large preview)

    After creating the content model from the image above, we can open the newly saved content model to begin adding fields containing data about the blog post into the content model.

    The Webiny content model page has an easy-to-use drag ’n’ drop editor which supports dragging fields from the left side and dropping them into the editor on the right side of the page. These fields fall into eight categories, each used to hold a specific type of value.

    Webiny drag and drop content editor.
    Webiny drag and drop content editor. (Large preview)

    Before we begin adding the fields for the content model, below is a layout of the items we want to be contained in the blog post.

    A flowchart containing items within a typical blog post.
    A flowchart containing items within a typical blog post. (Large preview)

    Note: While we do not have to insert the elements in the exact order above, adding fields is much easier when we have a mental picture of the content model structure.

    Add the following items with their appropriate fields into the content editor to create the model structure above.

    1. Article Title Item

    Starting with the first item in the Article Title, we drag ‘n’ drop the TEXT field into the editor. The TEXT field is appropriate for a title as it was created for short texts or single-line values.

    Add the Label, Helper Text and Placeholder Text input values into the Field settings modal as shown below.

    Field Settings Modal used for adding the values of a dropped field type.
    Field Settings Modal used for adding the values of a dropped field type. (Large preview)

    2. Date Item

    Next for the Date, we drag ‘n’ drop the DATE field into the editor. DATE fields have an extra date format with options of either date only, time only, date time with timezone, or date time without a given timezone. For our use-case, we will select the date time alongside the timezone option as we want readers to see when the post was created in their current timezone.

    3. Article Summary

    For the Article summary item, we would drag the LONG TEXT field into the editor and fill in the Label, Helper Text and Placeholder Text inputs in the field settings. The LONG TEXT field is used to store multi-line text values and this makes it ideal as the article summary would have several lines summarizing the blog post.

    We would use the LONG TEXT field to create the First Paragraph and Concluding Paragraph items since they all contain a lengthy amount of text values.

    4. Sample Image

    The FILES field is used for adding files and object data into the content model. For our use-case, we would add images into the content model using the FILES field. Drag ‘n’ Drop the FILES field into the editor for adding images.

    After adding all the fields above, click the Preview tab to show the input elements for the fields added into the content model, then fill in the values of these input fields.

    Preview showing all fields dropped in the content model editor.
    Preview showing all fields dropped in the content model editor. (Large preview)

    From the Preview tab above, we can see a preview of all model fields dropped into the drag ’n’ drop editor for creating a blog post using the content model. Add the respective values into each of the input fields, then click on the Save button at the bottom.

    After saving, we can view these input values by querying the GraphQL API using the GraphQL playground. Navigate to the API Information page using the sidebar, to access the GraphQL playground for your project.

    Using the GraphQL editor, you can inspect the entire GraphQL API structure using the schema introspection feature from the Docs.

    We can also create and test GraphQL queries and mutations on our content models using the GraphQL Playground before using them from a client-side application.

    GraphQL playground for testing the generated Headless CMS GraphQL API.
    GraphQL playground for testing the generated Headless CMS GraphQL API. (Large preview)

    Within the image above, we used the getContentModel query from our generated GraphQL API to query our Webiny database for the last content model we created. To get this exact model, we had to pass the modelId of the new model as an argument to the getContentModel query.
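For reference, the query run in the playground looks roughly like this. The exact field names depend on the schema Webiny generates from your model, so treat this as a sketch rather than the literal generated API:

```graphql
# Fetch a single content model by its modelId (illustrative argument value).
query {
  getContentModel(modelId: "blogPost") {
    data {
      name
      description
      createdOn
      modelId
    }
  }
}
```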

    At this point, we have set up our Webiny project and modeled our GraphQL API using the generated Webiny Admin application. We are now left with consuming the GraphQL API from a frontend application as a source of data. The following steps below describe how to consume your GraphQL API within a Gatsby Application.

    Generate An API Access Key

    All requests made to your Webiny GraphQL API endpoint must contain a valid token within its request headers for authentication. This token is obtained when you generate an API Key.

    From the side menu, click the API Keys item within the Security dropdown to navigate to the API Keys page where you create and manage your API Keys for your GraphQL API.

    Using the form on the right, we give the new key a name and a description, then select the All locales radio button option within the Content dropdown. Lastly, within the Headless CMS dropdown, we select the Full Access option from the Access Level dropdown to give this key full access to data within the Headless CMS app of our Admin project.

    Note: When granting app access permission to your API keys, Webiny provides a Custom Access option within the Access Level dropdown to streamline what the API key can be used for within the selected application.

    After saving the new API Key, a token is generated for use with that key. In the image below you can see an example of a token generated for my application within the highlighted box.

    (Large preview)

    Take note of this token, as we will use it next from our Gatsby web application.
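Outside of Gatsby plugins, any client can call the API by attaching this token to the request headers. Here is a minimal sketch; the endpoint and token values are placeholders, and whether the token needs a "Bearer " prefix depends on your Webiny version:

```javascript
// Build fetch options for a request to the Webiny GraphQL API.
// The token value here is a placeholder, not a real key.
function buildGraphqlRequest(query, token) {
  return {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: token, // the API key token generated above
    },
    body: JSON.stringify({ query }),
  };
}

// Usage (the actual network call is omitted in this sketch):
// fetch(WEBINY_ENDPOINT, buildGraphqlRequest(query, WEBINY_TOKEN))
const options = buildGraphqlRequest("{ listContentModels { data { name } } }", "my-token");
console.log(options.method); // "POST"
```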

    Setting Up A Gatsby Single Page Application

    Execute the command below to start the installer for creating a new Gatsby project on your local machine using npm, and select your project preferences from the installation prompts.

    npm init gatsby

    Next, run this command to install the needed dependencies into your Gatsby project:

    yarn add gatsby-source-graphql styled-components react-icons moment

    To use GraphQL within our Gatsby project, open gatsby-config.js and modify it to match the code block below:

    // gatsby-config.js
    
    module.exports = {
        siteMetadata: {
            title: "My Blog Powered by Webiny CMS",
        },
        plugins: [
            "gatsby-plugin-styled-components",
            "gatsby-plugin-react-helmet",
            {
                resolve: `gatsby-source-filesystem`,
                options: {
                    name: `images`,
                    path: `${__dirname}/src/images`,
                },
            },
            {
                resolve: "gatsby-source-graphql",
                options: {
                    // Arbitrary name for the remote schema Query type
                    typeName: "blogs",
                    // Field for remote schema. You'll use this in your Gatsby query
                    fieldName: "posts",
                    url: process.env.GATSBY_APP_WEBINY_GRAPHQL_ENDPOINT,
                    headers : {
                        Authorization : process.env.GATSBY_APP_WEBINY_GRAPHQL_TOKEN
                    }
                },
            },
        ],
    };
    

    Above, we add an external GraphQL API to Gatsby’s internal GraphQL API using the gatsby-source-graphql plugin. In the plugin options, we supply the GraphQL endpoint URL and attach the access token to the request headers, both read from our Gatsby environment variables.

    Note: Run the yarn webiny info command from a terminal launched within the Webiny project to print out the GraphQL API endpoint used in the url field of the gatsby-config.js file above.
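The two environment variables referenced in gatsby-config.js can be defined in a .env.development file at the project root. The values below are placeholders (use the endpoint printed by yarn webiny info), and note that Gatsby does not load .env files by itself, so you may need require("dotenv").config() at the top of gatsby-config.js:

```
# .env.development (placeholder values)
GATSBY_APP_WEBINY_GRAPHQL_ENDPOINT=https://your-api-id.cloudfront.net/cms/read/en-US
GATSBY_APP_WEBINY_GRAPHQL_TOKEN=your-generated-api-token
```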

    When we next start the Gatsby application, our GraphQL schema and data will be merged into Gatsby’s default generated schema, which we can introspect using Gatsby’s GraphiQL playground at http://localhost:8000/___graphql to see fields similar to those in the image below.

    GraphiQL playground generated by Gatsby for testing and introspecting the Gatsby generated schema.
    GraphiQL playground generated by Gatsby for testing and introspecting the Gatsby generated schema. (Large preview)

    Above we tested the Webiny remote schema with a test query alongside exploring the remote schema to see what fields are available within our Gatsby application.

    Note: A new test content model was later created to demonstrate multiple content models being returned from the listContentModels query.

    To query and display this data within the Gatsby application, create a new file (posts.js) containing the following React component:

    import React from "react"
    import {FiCalendar} from "react-icons/fi"
    import {graphql, useStaticQuery, Link} from "gatsby";
    import Moment from "moment"
    
    import {PostsContainer, Post, Title, Text, Button, Hover, HoverIcon} from "../styles"
    import Header from "../components/header"
    import Footer from "../components/footer"
    
    const Posts = () => {
        const data = useStaticQuery(graphql`
            query fetchAllModels {
                posts {
                    listContentModels {
                        data {
                            name
                            description
                            createdOn
                            modelId
                        }
                    }
                }
            }`)
    
        return (
            <div>
                <Header title={"Home   ||   Blog"}/>
    
                <div style={{display: "flex", justifyContent: "center",}}>
    
                    <PostsContainer>
                        <div>
                            <Title align={"center"} bold> A collection of my ideas</Title>
                            <Text align={"center"} color={"grey"}> A small space to document my thoughts in form of blog posts and articles </Text>
                        </div>
                        <br/>
                        {
                            data.posts.listContentModels.data.map(({name, description, createdOn, modelId}) => (
                                <Post key={modelId}>
                                    <div style={{display: "flex"}}>
                                        <HoverIcon>
                                            <FiCalendar/>
                                        </HoverIcon>
    
                                        <div>
                                            <Text small
                                              style={{marginTop: "2px"}}> {Moment(createdOn).format("dddd, MMMM D, YYYY")} </Text>
                                        </div>
                                    </div>
                                    <br/>
                                    <Title bold align={"center"}> {name} </Title>
                                    <br/>
                                    <Text align={"center"}> {description} </Text>
                                    <br/>
                                    <div style={{textAlign: "right"}}>
                                        <Link to={`/${modelId}`} state={{modelId}}>
                                            <Button onClick={_ => {
                                            }}> Continue Reading </Button>
                                        </Link>
                                    </div>
                                </Post>
                            ))
                        }
                        <br/>
    
                    </PostsContainer>
                </div>
    
                <Footer/>
    
            </div>
    
        )
    }
    
    export default Posts

    In the code block above, we make a query using the useStaticQuery hook from Gatsby and use the returned data to populate the posts within the component, styled using styled-components.

    Gatsby blog application home page showing a list of content models from GraphQL API.
    Gatsby blog application home page showing a list of content models from GraphQL API. (Large preview)

    Taking a closer look at the Continue Reading button in the code block above, we can see it is wrapped with a link pointing to a page named after the modelId currently being iterated over. This page is created dynamically from a template each time the Gatsby application is built.

    To implement this creation of dynamic pages, create a new file (gatsby-node.js) with the following code.

    // gatsby-node.js
    const path = require("path")
    
    exports.createPages = async ({graphql, actions, reporter}) => {
        const {createPage} = actions
    
        const result = await graphql(`
            query getContent {
                posts {
                    listContentModels {
                        data {
                            description
                            createdOn
                            modelId
                            name
                        }
                    }
                }
            }
        `)
    
        // Template to create dynamic pages from.
        const blogPostTemplate = path.resolve(`src/pages/post.js`)
    
        result.data.posts.listContentModels.data.forEach(({description, modelId, createdOn, name}) => {
            createPage({
                path: modelId,
                component: blogPostTemplate,
                // data to pass into the dynamic template
                context: {
                    name, description, modelId, createdOn
                },
            })
        })
    }

    As an overview, the code block above adds a new task to our Gatsby application, performed each time the application is built. At a closer look, we can see the following operations being done while performing this task.

    First, we make a GraphQL query to fetch all models created on Webiny, which returns an array with the contained fields. Then we iterate over the result, each time using the createPage API from Gatsby to create a new page dynamically with the component in ./pages/post.js as a template.

    Lastly, we pass the data received from each object in the query result into the component being used as a template.

    At this point, the template component does not yet exist. Create a new file (post.js) with the code below to create the template.

    // ./pages/post.js
    
    import React from "react"
    import Moment from "moment"
    
    import Header from "../components/header"
    import Footer from "../components/footer"
    import {PostContainer, Text, Title} from "../styles";
    import Layout from "../components/layout";
    
    const Post = ({ pageContext }) => {
        const { name, description , createdOn} = pageContext
    
        return (
            <Layout>
                <Header title={name}/>
                <br/>
    
                <div style={{display: "flex", justifyContent: "center"}}>
                    <PostContainer>
                        <Title align={"center"}> {name} </Title>
                        <Text color={"grey"} align={"center"}>
                          Created On {Moment(createdOn).format("dddd, MMMM D, YYYY")}
                        </Text>
                        <br/>
                        <Text> {description} </Text>
                    </PostContainer>
                </div>
    
                <br/>
    
                <Footer/>
            </Layout>
        )
    }
    
    export default Post

    Above, we created a component that is used as a template for the dynamic pages. This component receives a pageContext object each time it is used as a template; the fields within the object are destructured and used to populate the data shown on the page, as in the example shown below.

    Webiny blog post
    (Large preview)

    Conclusion

    In this article, we have taken a detailed look at what Webiny is, the serverless features it provides, and how its Headless CMS can be used with a static site generator such as Gatsby as a source of data.

    As explained earlier, Webiny provides more serverless services apart from the Headless CMS, such as the no-code Form Builder for building interactive forms, the Page Builder, and even a File Manager for use within your applications.

    If you are looking for a service to leverage when building your next serverless application, then you should give Webiny a try. You can join the Webiny community on Slack or contribute to the open-source Webiny project on GitHub.





    Building Serverless Frontend Applications Using Google Cloud Platform — Smashing Magazine

    11/06/2020

    About The Author

    Nwani Victory works as a Frontend Engineer at Liferithms.inc from Lagos, Nigeria. After office hours, he doubles as a Cloud Engineer seeking ways to make Cloud …

    The use of serverless applications by developers to handle the business logic of their applications is on the rise, but how does the Google Cloud, a major public cloud provider, allow developers to manage serverless applications? In this article, you will learn what serverless applications are, how they are used on the Google Cloud, and scenarios in which they can be used in a front-end application.

    Recently, the development paradigm of applications has begun to shift from manually having to deploy, scale and update the resources used within an application to relying on third-party cloud service providers to do most of the management of these resources.

    As a developer or an organization that wants to build a market-fit application within the quickest time possible, your main focus might be on delivering your core application service to your users while spending a smaller amount of time on configuring, deploying, and stress-testing your application. If this is your use case, handling the business logic of your application in a serverless manner might be your best option. But how?

    This article is beneficial to front-end engineers who want to build certain functionalities within their application or back-end engineers who want to extract and handle a certain functionality from an existing back-end service using a serverless application deployed to the Google Cloud Platform.

    Note: To benefit from what will be covered here, you need to have experience working with React. No prior experience in serverless applications is required.

    Before we begin, let’s understand what serverless applications really are and how the serverless architecture can be used when building an application within the context of a frontend engineer.

    Serverless Applications

    Serverless applications are applications broken down into tiny reusable event-driven functions, hosted and managed by third-party cloud service providers within the public cloud on behalf of the application author. These are triggered by certain events and are executed on demand. Although the “less” suffix attached to the serverless word indicates the absence of a server, this is not 100% the case. These applications still run on servers and other hardware resources, but in this case, those resources are not provisioned by the developer but rather by a third-party cloud service provider. So they are server-less to the application author but still run on servers and are accessible over the public internet.

    An example use case of a serverless application would be sending emails to potential users who visit your landing page and subscribe to receive product launch emails. At this stage, you probably don’t have a back-end service running and would not want to sacrifice the time and resources needed to create, deploy, and manage one, all because you need to send emails. Here, you can write a single file that uses an email client, deploy it to any cloud provider that supports serverless applications, and let that provider manage the application on your behalf while you connect it to your landing page.

    While there are plenty of reasons why you might consider leveraging serverless applications, or Functions As A Service (FaaS) as they are also called, for your frontend application, here are some notable ones to consider:

    • Application auto-scaling
      Serverless applications are horizontally scaled, and this “scaling out” is done automatically by the cloud provider based on the number of invocations, so the developer doesn’t have to manually add or remove resources when the application is under heavy load.
    • Cost effectiveness
      Being event-driven, serverless applications run only when needed, and this is reflected in the charges, as they are billed based on the number of times they are invoked.
    • Flexibility
      Serverless applications are built to be highly reusable, which means they are not bound to a single project or application. A particular functionality can be extracted into a serverless application, deployed, and used across multiple projects or applications. Serverless applications can also be written in the preferred language of the application author, although some cloud providers support only a small number of languages.

    When making use of serverless applications, every developer has a vast array of cloud providers within the public cloud to make use of. Within the context of this article we will focus on serverless applications on the Google Cloud Platform — how they are created, managed, deployed and how they also integrate with other products on the Google Cloud. To do this, we will add new functionalities to this existing React application while working through the process of:

    • Organizing application workflows using the Google Cloud.
    • Storing and retrieving users’ data on the cloud.
    • Creating and managing cron jobs on the Google Cloud.
    • Deploying Cloud Functions to the Google Cloud.

    Note: Serverless applications are not bound to React only, as long as your preferred front-end framework or library can make an HTTP request, it can use a serverless application.

    Google Cloud Functions

    The Google Cloud allows developers to create serverless applications using Cloud Functions and runs them using the Functions Framework. As the name suggests, Cloud Functions are reusable event-driven functions deployed to the Google Cloud to listen for a specific trigger out of the six available event triggers, and then perform the operation they were written to execute.

    Cloud Functions, which are short-lived (with a default execution timeout of 60 seconds and a maximum of 9 minutes), can be written using JavaScript, Python, Golang, and Java, and executed using their runtime. In JavaScript, they can be executed using only certain available versions of the Node runtime, and are written in the form of CommonJS modules using plain JavaScript, as they are exported as the primary function to be run on the Google Cloud.

    An example of a Cloud Function is the one below, an empty boilerplate for a function that handles a user’s data.

    // index.js
    
    exports.firestoreFunction = function (req, res) {
      return res.status(200).send({ data: `Hello ${req.query.name}` });
    }

    Above, we have a module which exports a function. When executed, it receives the request and response arguments, similar to an HTTP route.

    Note: A Cloud Function responds to every HTTP method when a request is made. This is worth noting when expecting data in the request argument, as the data attached when making a request to execute a Cloud Function is present in the request body for POST requests, but in the query parameters for GET requests.

    Cloud Functions can be executed locally during development by installing the @google-cloud/functions-framework package within the same folder where the written function is placed, or by doing a global installation to use it for multiple functions by running npm i -g @google-cloud/functions-framework from your command line. Once installed, a script should be added to package.json with the name of the exported module, similar to the one below:

    
    "scripts": {                                                                
         "start": "functions-framework --target=firestoreFunction --port=8000",       
      }

    Above we have a single command within our scripts in the package.json file which runs the functions-framework and also specifies the firestoreFunction as the target function to be run locally on port 8000.

    We can test this function’s endpoint by making a GET request to port 8000 on localhost using curl. Pasting the command below in a terminal will do that and return a response.

    curl "http://localhost:8000?name=Smashing%20Magazine%20Author"

    The request above, when executed, makes a request with the GET HTTP method and responds with a 200 status code and an object containing the name added in the query.

    Deploying A Cloud Function

    Out of the available deployment methods, one quick way to deploy a cloud function from a local machine is to use the Cloud SDK after installing it. Running the command below from the terminal, after authenticating the gcloud SDK with your project on the Google Cloud, deploys a locally created function to the Cloud Functions service.

    gcloud functions deploy "demo-function" --runtime nodejs10 --trigger-http --entry-point=demo --timeout=60 --set-env-vars NAME="Developer" --allow-unauthenticated

    Using the flags explained below, the command above deploys an HTTP-triggered function to the Google Cloud with the name “demo-function”.

    • NAME
      This is the name given to a cloud function when deploying it and is required.
    • region
      This is the region where the cloud function is to be deployed. By default, it is deployed to us-central1.
    • trigger-http
      This selects HTTP as the function’s trigger type.
    • allow-unauthenticated
      This allows the function to be invoked outside the Google Cloud through the Internet, using its generated endpoint, without checking if the caller is authenticated.
    • source
      Local path from the terminal to the file which contains the function to be deployed.
    • entry-point
      This is the specific exported module to be deployed from the file where the functions were written.
    • runtime
      This is the language runtime to be used for the function, selected from the list of accepted runtimes.
    • timeout
      This is the maximum time a function can run before timing out. It is 60 seconds by default and can be set to a maximum of 9 minutes.

    Note: Making a function allow unauthenticated requests means that anybody with your function’s endpoint can make requests without being granted permission. To mitigate this, we can make sure the endpoint stays private by storing it in environment variables, or by requiring an authorization header on each request.
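One way to sketch the authorization-header approach mentioned above is to check for a shared secret before handling the request. The header format, helper names, and mock response object below are assumptions for illustration, not part of the deployed function:

```javascript
// Hypothetical guard: accept the request only when the Authorization header
// carries an expected shared secret.
function isAuthorized(req, secret) {
  return Boolean(req.headers) && req.headers.authorization === `Bearer ${secret}`;
}

// Inside a handler, reject unauthenticated callers early.
function guardedHandler(req, res, secret) {
  if (!isAuthorized(req, secret)) {
    return res.status(401).send({ error: "unauthorized" });
  }
  return res.status(200).send({ data: "ok" });
}

// Tiny mock response object to exercise the guard locally.
function mockRes() {
  return {
    statusCode: null,
    body: null,
    status(code) { this.statusCode = code; return this; },
    send(payload) { this.body = payload; return this; },
  };
}

const res = mockRes();
guardedHandler({ headers: { authorization: "Bearer s3cret" } }, res, "s3cret");
console.log(res.statusCode); // 200
```

In production, the secret would come from an environment variable or a secret manager rather than being hard-coded.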

    Now that our demo-function has been deployed and we have the endpoint, we can test this function as if it were being used in a real-world application using a global installation of autocannon. Running autocannon -d=5 -c=300 CLOUD_FUNCTION_URL from an open terminal would generate 300 concurrent requests to the cloud function within a 5-second duration. This is more than enough to start the cloud function and also generate some metrics that we can explore on the function’s dashboard.

    Note: A function’s endpoint will be printed out in the terminal after deployment. If that is not the case, run gcloud functions describe FUNCTION_NAME from the terminal to get the details about the deployed function, including the endpoint.

    Using the Metrics tab on the dashboard, we can see a visual representation of the recent requests, consisting of how many invocations were made, how long they lasted, the memory footprint of the function, and how many instances were spun up to handle the requests made.

    A function’s dashboard showing a chart of gathered metrics from all recent requests made.
    Cloud function dashboard showing all requests made. (Large preview)

    A closer look at the Active Instances chart within the image above shows the horizontal scaling capacity of the Cloud Functions, as we can see that 209 instances were spun up within a few seconds to handle the requests made using autocannon.

    Cloud Function Logs

    Every function deployed to the Google Cloud has a log, and each time the function is executed, a new entry is made into that log. From the Logs tab on the function’s dashboard, we can see a list of all the log entries from a cloud function.

    Below are the log entries from our deployed demo-function created as a result of the requests we made using autocannon.

    The cloud function log showing the logs from the function’s execution times.
    Cloud function log tab showing all execution logs. (Large preview)

    Each of the log entries above shows exactly when a function was executed, how long the execution took, and what status code it ended with. If a function results in any errors, details of the error, including the line where it occurred, will be shown in the logs here.

    The Logs Explorer on the Google Cloud can be used to see more comprehensive details about the logs from a cloud function.

    Cloud Functions With Front-end Applications

    Cloud functions are very useful and powerful for frontend engineers. A frontend engineer without the knowledge of managing back-end applications can extract a piece of functionality into a cloud function, deploy it to the Google Cloud, and use it in a frontend application by making HTTP requests to the cloud function through its endpoint.

    To show how cloud functions can be used in a frontend application, we will add more features to this React application. The application already has basic routing between the authentication and home pages set up. We will expand it to use the React Context API to manage our application state, as the created cloud functions will be used within the application’s reducers.

    To get started, we create our application’s context using the createContext API and also create a reducer for handling the actions within our application.

    // state/index.js
    import { createContext } from "react";

    export const UserReducer = (state, action) => {
      switch (action.type) {
        case "CREATE-USER":
          break;
        case "UPLOAD-USER-IMAGE":
          break;
        case "FETCH-DATA":
          break;
        case "LOGOUT":
          break;
        default:
          console.log(`${action.type} is not recognized`);
      }
    };

    export const userState = { user: null, isLoggedIn: false };

    export const UserContext = createContext(userState);

    Above, we started by creating a UserReducer function which contains a switch statement, allowing it to perform an operation based on the type of action dispatched into it. The switch statement has four cases, and these are the actions we will be handling. For now, they don’t do anything yet, but when we begin integrating with our cloud functions, we will incrementally implement the actions to be performed in them.

    We also created and exported our application’s context using the React createContext API and gave it a default value of the userState object, which contains a user value (currently null, to be updated with the user’s data after authentication) and an isLoggedIn boolean value to indicate whether the user is logged in.
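Stripped of side effects such as routing, a reducer’s contract is just a function from a state and an action to a new state. The simplified sketch below exercises only a LOGOUT-style case to show that contract; it is not the full reducer used in the application:

```javascript
// Simplified, side-effect-free version of the reducer, shown only to
// illustrate the (state, action) => newState contract.
const userState = { user: null, isLoggedIn: false };

function userReducer(state, action) {
  switch (action.type) {
    case "LOGOUT":
      // Clear the user and flip the logged-in flag without mutating state.
      return { ...state, user: null, isLoggedIn: false };
    default:
      return state;
  }
}

const loggedIn = { user: { email: "user@example.com" }, isLoggedIn: true };
const next = userReducer(loggedIn, { type: "LOGOUT" });
console.log(next.isLoggedIn); // false
```

Because the reducer returns a new object rather than mutating the old one, React can detect the state change and re-render the subscribed components.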

    Now we can proceed to consume our context, but before we do that, we need to wrap our entire application tree with the Provider attached to the UserContext so that the children components can subscribe to the value changes of our context.

    // index.js 
    import React from "react";
    import ReactDOM from "react-dom";
    import "./index.css";
    import App from "./app";
    import * as serviceWorker from "./serviceWorker";
    import { UserContext, userState } from "./state/";
    
    ReactDOM.render(
      <React.StrictMode>
        <UserContext.Provider value={userState}>
          <App />
        </UserContext.Provider>
      </React.StrictMode>,
      document.getElementById("root")
    );
    
    serviceWorker.unregister();
    

    We wrap our entire application with the UserContext provider at the root component and pass our previously created userState default value in the value prop.

    Now that we have our application state fully set up, we can move on to creating the user’s data model using the Google Cloud Firestore through a cloud function.

    Handling Application Data

    A user’s data within this application consists of a unique ID, an email, a password, and the URL of an image. Using a cloud function, this data will be stored on the cloud using the Cloud Firestore service offered on the Google Cloud Platform.
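A single document within the users collection would therefore have a shape similar to the following (the field values shown are illustrative):

```
{
  "id": "<generated UUID>",
  "email": "user@example.com",
  "password": "<bcrypt hash>",
  "img_uri": null
}
```

The img_uri field starts out as null and is filled in later, once the user uploads a profile image.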

    The Google Cloud Firestore, a flexible NoSQL database, was carved out of the Firebase Realtime Database with new enhanced features that allow for richer and faster queries alongside offline data support. Data within the Firestore service is organized into collections of documents.

    Firestore can be accessed visually through the Google Cloud Console. To launch it, open the left navigation pane, scroll down to the Database section, and click on Firestore. That shows the list of collections for users with existing data, or prompts the user to create a new collection when there is no existing collection. We will create a users collection to be used by our application.

    Similar to other services on the Google Cloud Platform, Cloud Firestore also has a JavaScript client library built to be used in a Node environment (an error will be thrown if it is used in the browser). To work around this, we use the Cloud Firestore in a cloud function via the @google-cloud/firestore package.

    Using The Cloud Firestore With A Cloud Function

    To get started, we rename the first function we created from demo-function to firestoreFunction and then expand it to connect with our users collection on Firestore and also save and log in users.

    require("dotenv").config();
    const { Firestore } = require("@google-cloud/firestore");
    
    exports.firestoreFunction = function (req, res) {
        const { email, password, type } = req.body;
        const firestore = new Firestore();
        const document = firestore.collection("users");
        console.log(document); // prints details of the collection to the function logs
        if (!type) {
            res.status(422).send("An action type was not specified");
        }
    
        switch (type) {
            case "CREATE-USER":
                break;
            case "LOGIN-USER":
                break;
            default:
                res.status(422).send(`${type} is not a valid function action`)
        }
    };
    

    To handle more operations involving Firestore, we have added a switch statement with two cases to handle the authentication needs of our application. Our switch statement evaluates a type expression, which we add to the request body when making a request to this function from our application. Whenever this type value is not present in the request body, the request is identified as a bad request, and a 422 status code alongside a message indicating the missing type is sent as a response.

    We establish a connection with Firestore using the Application Default Credentials (ADC) within the Cloud Firestore client library. On the next line, we call the collection method, passing in the name of our collection, and store the result in another variable. We will be using this to perform further operations on the collection’s contained documents.

    Note: Client libraries for services on the Google Cloud connect to their respective service using a created service account key passed in when initializing the constructor. When the service account key is not present, the library defaults to using the Application Default Credentials, which in turn connect using the IAM roles assigned to the cloud function.

    After editing the source code of a function that was deployed locally using the gcloud SDK, we can re-run the previous command from a terminal to update and redeploy the cloud function.

    Now that a connection has been established, we can implement the CREATE-USER case to create a new user using data from the request body and then move on to the LOGIN-USER which finds an existing user and sends back a cookie.

    
    require("dotenv").config();
    const { Firestore } = require("@google-cloud/firestore");
    const bcrypt = require("bcryptjs");
    const { v4 : uuid } = require("uuid");
    const cors = require("cors")({ origin: true });
    
    exports.firestoreFunction = function (req, res) {
        return cors(req, res, () => {
            const { email, password, type } = req.body;
            const firestore = new Firestore();
            const document = firestore.collection("users");
            if (!type) {
                res.status(422).send("An action type was not specified");
            }
    
            switch (type) {
                case "CREATE-USER":
                  if (!email || !password) {
                    res.status(422).send("email and password fields missing");
                  }
                
                  const id = uuid();
                  return bcrypt.genSalt(10, (err, salt) => {
                    bcrypt.hash(password, salt, (err, hash) => {
                      document.doc(id)
                        .set({
                          id: id,
                          email: email,
                          password: hash,
                          img_uri: null,
                         })
                        .then((response) => res.status(200).send(response))
                        .catch((e) =>
                            res.status(501).send({ error : e })
                          );
                        });
                      });               
    
               case "LOGIN-USER":
                  break;
              default:
                res.status(422).send(`${type} is not a valid function action`)
            }
        });
    };
    

    We generated a UUID using the uuid package to be used as the ID of the document about to be saved, by passing it into the doc method on the collection, and we also saved it as the user’s id. By default, a random ID is generated for every inserted document, but in this case, we will update the document when handling the image upload, and the UUID is what will be used to get the particular document to be updated. Rather than store the user’s password in plain text, we salt and hash it first using bcryptjs, then store the resulting hash as the user’s password.

    To integrate the firestoreFunction cloud function into the app, we use it from the CREATE_USER case within the user reducer.

    After clicking the Create Account button, an action is dispatched to the reducers with a CREATE_USER type to make a POST request containing the typed email and password to the firestoreFunction function’s endpoint.

    import { createContext } from "react";
    import { navigate } from "@reach/router";
    import Axios from "axios";
    
    export const userState = {
      user : null, 
      isLoggedIn: false,
    };
    
    export const UserReducer = (state, action) => {
      switch (action.type) {
        case "CREATE_USER":
          const FIRESTORE_FUNCTION = process.env.REACT_APP_FIRESTORE_FUNCTION;
          const { userEmail, userPassword } = action;
    
          const data = {
            type: "CREATE-USER",
            email: userEmail,
            password: userPassword,
          };
    
          Axios.post(`${FIRESTORE_FUNCTION}`, data)
            .then((res) => {
              navigate("/home");
              return { ...state, isLoggedIn: true };
            })
            .catch((e) => console.log(`couldnt create user. error : ${e}`));
          break;
        case "LOGIN-USER":
          break;
        case "UPLOAD-USER-IMAGE":
          break;
        case "FETCH-DATA":
          break;
        case "LOGOUT":
          navigate("/login");
          return { ...state, isLoggedIn: false };
        default:
          break;
      }
    };
    
    export const UserContext = createContext(userState);
    

    Above, we made use of Axios to make the request to the firestoreFunction, and after this request has been resolved, we set the user’s initial state from null to the data returned from the request, and lastly we route the user to the home page as a logged-in user.

    Next, we move on to implementing login functionality within our firestoreFunction function to enable an existing user to log in to their account using their saved credentials.

    
    require("dotenv").config();
    const { Firestore } = require("@google-cloud/firestore");
    const bcrypt = require("bcryptjs");
    const cors = require("cors")({ origin: true });
    
    exports.firestoreFunction = function (req, res) {
        return cors(req, res, () => {
            const { email, password, type } = req.body;
            const firestore = new Firestore();
            const document = firestore.collection("users");
            if (!type) {
                res.status(422).send("An action type was not specified");
            }
    
            switch (type) {
                case "CREATE-USER":
                    // ... CREATE-USER LOGIC
                    break;
                case "LOGIN-USER":
                    break;
                default:
                    res.status(422).send({ error : `${type} is not a valid action` })
            }
        });
    };
    

    At this point, a new user can successfully create an account and get routed to the home page. This process demonstrates how we use the Cloud Firestore to perform the basic saving and mutation of data in a serverless application.

    Handling File Storage

    Storing and retrieving a user’s files is an often-needed feature in an application. In an application connected to a Node.js backend, Multer is often used as middleware to handle the multipart/form-data in which an uploaded file arrives. But in the absence of a Node.js backend, we can use an online file storage service such as the Google Cloud Storage to store files.

    The Google Cloud Storage is a globally available file storage service used to store any amount of data as objects for applications in buckets. It is flexible enough to handle the storage of static assets for both small and large applications.

    To use the Cloud Storage service within an application, we could make use of the available Storage API endpoints or use the official Node Storage client library. However, the Node Storage client library does not work within a browser window, so we make use of a cloud function where the library can be used.

    An example of this is the cloud function below, which connects to and uploads a file into a created Cloud Storage bucket.

    const cors = require("cors")({ origin: true });
    const { Storage } = require("@google-cloud/storage");
    const StorageClient = new Storage();
    
    exports.Uploader = (req, res) => {
      return cors(req, res, () => {
        const { file } = req.body;
        StorageClient.bucket("TEST_BUCKET")
          .file(file.name)
          .save(file.data)
          .then((response) => {
            console.log(response);
            res.status(200).send(response);
          })
          .catch((e) => res.status(422).send({ error: e }));
      });
    };
    

    From the cloud function above, we are performing the two following main operations:

    • First, we create a connection to the Cloud Storage within the Storage constructor and it uses the Application Default Credentials (ADC) feature on the Google Cloud to authenticate with the Cloud Storage.

    • Second, we upload the file included in the request body to our TEST_BUCKET by calling the .file method with the file’s name and then saving its content. Since this is an asynchronous operation, we use a promise to know when this action has been resolved, and then we send a 200 response back, thus ending the life-cycle of the invocation.

    Now, we can expand the Uploader Cloud Function above to handle the upload of a user’s profile image. The cloud function will receive a user’s profile image, store it within our application’s cloud bucket, and then update the user’s img_uri data within our users’ collection in the Firestore service.

    require("dotenv").config();
    const { Firestore } = require("@google-cloud/firestore");
    const cors = require("cors")({ origin: true });
    const { Storage } = require("@google-cloud/storage");
    
    const StorageClient = new Storage();
    const BucketName = process.env.STORAGE_BUCKET;
    
    exports.Uploader = (req, res) => {
      return cors(req, res, () => {
        const { file, userId } = req.body;
        const firestore = new Firestore();
        const document = firestore.collection("users");
        const bucketFile = StorageClient.bucket(BucketName).file(file.name);
    
        bucketFile
          .save(file.data)
          .then(() => bucketFile.makePublic())
          .then(() => {
            const img_uri = `https://storage.googleapis.com/${BucketName}/${file.name}`;
            return document
              .doc(userId)
              .update({
                img_uri,
              });
          })
          .then((updateResult) => res.status(200).send(updateResult))
          .catch((e) => res.status(500).send(e));
      });
    };

    Now we have expanded the Uploader function above to perform the following extra operations:

    • First, it makes a new connection to the Firestore service to get our users collection by initializing the Firestore constructor, and it uses the Application Default Credentials (ADC) to authenticate.

    • After uploading the file added in the request body, we make it public, and therefore accessible via a public URL, by calling the makePublic method on the uploaded file. According to Cloud Storage’s default Access Control, without making a file public, it cannot be accessed over the Internet, and the application would not be able to load it.

    Note: Making a file public means anyone using your application can copy the file link and have unrestricted access to the file. One way to prevent this is by using a Signed URL to grant temporary access to a file within your bucket instead of making it fully public.

    • Next, we update the user’s existing data to include the URL of the uploaded file. We locate the particular user’s document using the userId included in the request body, then set the img_uri field to contain the URL of the newly uploaded image.

    The Uploader cloud function above can be used within any application having registered users within the Firestore service. All that is needed is to make a POST request to the endpoint, putting the user’s ID and an image in the request body.

    An example of this within the application is the UPLOAD-FILE case which makes a POST request to the function and puts the image link returned from the request in the application state.

    // index.js
    import Axios from "axios";
    
    const UPLOAD_FUNCTION = process.env.REACT_APP_UPLOAD_FUNCTION;
    
    export const UserReducer = (state, action) => {
      switch (action.type) {
        case "CREATE-USER":
          // ..... CREATE-USER LOGIC ....
          break;
    
        case "UPLOAD-FILE":
          const { file, id } = action;
          return Axios.post(UPLOAD_FUNCTION, { file, id }, {
            headers: {
              "Content-Type": "image/png",
            },
          })
          .then((response) => {})
          .catch((e) => console.log(e));
    
        default:
          return console.log(`${action.type} case not recognized`);
      }
    };
    

    From the switch case above, we make a POST request using Axios to the UPLOAD_FUNCTION, passing in the added file to be included in the request body, and we also add an image Content-Type to the request header.

    After a successful upload, the response returned from the cloud function contains the user’s data document, which has been updated to contain a valid URL of the image uploaded to the Google Cloud Storage. We can then update the user’s state to contain the new data, and this also updates the user’s profile image src element in the profile component.

    A user’s profile page with an updated profile image
    A user’s profile page which has just been updated to show the newly updated profile image. (Large preview)

    Handling Cron Jobs

    Repetitive automated tasks, such as sending emails to users or performing an internal action at a specific time, are often an included feature of applications. In a regular Node.js application, such tasks could be handled as cron jobs using node-cron or node-schedule. When building serverless applications using the Google Cloud Platform, the Cloud Scheduler is designed to perform such cron operations.

    Note: Although the Cloud Scheduler works similarly to the Unix cron utility in creating jobs that are executed in the future, it is important to note that the Cloud Scheduler does not execute a command as the cron utility does. Rather, it performs an operation using a specified target.

    As the name implies, the Cloud Scheduler allows users to schedule an operation to be performed at a future time. Each operation is called a job, and jobs can be visually created, updated, and even destroyed from the Scheduler section of the Cloud Console. Aside from a name and description field, jobs on the Cloud Scheduler consist of the following:

    • Frequency: This is used to schedule the execution of the cron job. Schedules are specified using the unix-cron format, which is originally used when creating background jobs on the cron table in a Linux environment. The unix-cron format consists of a string with five values, each representing a time point. Below we can see each of the five fields and the values they represent.

      *   *   *   *   *
      |   |   |   |   |
      |   |   |   |   +----- day of week ( 0 - 6 )
      |   |   |   +--------- month ( 1 - 12 )
      |   |   +------------- day of month ( 1 - 31 )
      |   +----------------- hour ( 0 - 23 )
      +--------------------- minute ( 0 - 59 )

    The Crontab generator tool comes in handy when trying to generate a frequency value for a job. If you are finding it difficult to put the time values together, the Crontab generator has a visual drop-down where you can select the values that make up a schedule, then copy the generated value and use it as the frequency.
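For reference, here are a few unix-cron strings and the schedules they describe:

```
0 9 * * *      every day at 09:00
*/15 * * * *   every 15 minutes
0 0 1 * *      at midnight on the first day of every month
0 8 * * 1      every Monday at 08:00
```

Reading each string right to left against the five fields above makes the schedule easy to verify.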

    • Timezone: The timezone in which the cron job is executed. Due to the time difference between timezones, cron jobs executed with different specified timezones will have different execution times.

    • Target: This is used in the execution of the specified job. A target could be an HTTP type, where the job makes a request at the specified time to a URL; a Pub/Sub topic, which the job can publish messages to or pull messages from; or, lastly, an App Engine application.

    The Cloud Scheduler combines perfectly with HTTP-triggered cloud functions. When a job within the Cloud Scheduler is created with its target set to HTTP, this job can be used to execute a cloud function. All that needs to be done is to specify the endpoint of the cloud function, specify the HTTP verb of the request, then add whatever data needs to be passed to the function in the displayed body field, as shown in the sample below:

    Fields required for creating a cron job using the cloud console
    Fields required for creating a cron job using the cloud console. (Large preview)

    The cron job in the image above would run at 9 AM every day, making a POST request to the sample endpoint of a cloud function.

    A more realistic use case of a cron job is sending scheduled emails to users at a given interval using an external mailing service such as Mailgun. To see this in action, we will create a new cloud function which sends an HTML email to a specified email address, using the nodemailer JavaScript package to connect to Mailgun:

    // index.js
    require("dotenv").config();
    const nodemailer = require("nodemailer");
    
    exports.Emailer = (req, res) => {
      let sender = process.env.SENDER;
      const { receiver, type } = req.body;
    
      var transport = nodemailer.createTransport({
        host: process.env.HOST,
        port: process.env.PORT,
        secure: false,
        auth: {
          user: process.env.SMTP_USERNAME,
          pass: process.env.SMTP_PASSWORD,
        },
      });
    
      if (!receiver) {
        res.status(400).send({ error: `Empty email address` });
      }
    
      transport.verify(function (error, success) {
        if (error) {
          res
            .status(401)
            .send({ error: `failed to connect with SMTP. check credentials` });
        }
      });
    
      switch (type) {
        case "statistics":
          return transport.sendMail(
            {
              from: sender,
              to: receiver,
              subject: "Your usage statistics of demo app",
              html: { path: "./welcome.html" },
            },
            (error, info) => {
              if (error) {
                res.status(401).send({ error : error });
              }
              transport.close();
              res.status(200).send({ data : info });
            }
          );
    
        default:
          res.status(500).send({
            error: "An available email template type has not been matched.",
          });
      }
    };

    Using the cloud function above, we can send an email to any user’s email address specified as the receiver value in the request body. It performs the sending of emails through the following steps:

    • It creates an SMTP transport for sending messages by passing in the host, user, and pass (which stands for password), all displayed on the user’s Mailgun dashboard when a new account is created.
    • Next, it verifies if the SMTP transport has the credentials needed in order to establish a connection. If there’s an error in establishing the connection, it ends the function’s invocation and sends back a 401 unauthenticated status code.
    • Next, it calls the sendMail method to send the email containing the HTML file as the email’s body to the receiver’s email address specified in the to field.

    Note: We use a switch statement in the cloud function above to make it more reusable for sending several emails for different recipients. This way we can send different emails based on the type field included in the request body when calling this cloud function.

    Now that there is a function that can send an email to a user, we are left with creating the cron job to invoke this cloud function. This time, the cron jobs will be created dynamically each time a new user is created, using the official Google Cloud client library for the Cloud Scheduler from the initial firestoreFunction.

    We expand the CREATE-USER case to create the job which sends the email to the created user at a one-day interval.

    
    require("dotenv").config();
    const { Firestore } = require("@google-cloud/firestore");
    const { CloudSchedulerClient } = require("@google-cloud/scheduler");
    const bcrypt = require("bcryptjs");
    const cors = require("cors")({ origin: true });
    
    exports.firestoreFunction = function (req, res) {
      return cors(req, res, () => {
        const { email, password, type } = req.body;
        const firestore = new Firestore();
        const document = firestore.collection("users");
        const client = new CloudSchedulerClient();
        const parent = client.locationPath(
          process.env.PROJECT_ID,
          process.env.LOCATION_ID
        );
    
        if (!type) {
          return res.status(422).send({ error: "An action type was not specified" });
        }
    
        switch (type) {
          case "CREATE-USER": {
            if (!email || !password) {
              return res.status(422).send("email and password fields missing");
            }
    
            const job = {
              httpTarget: {
                uri: process.env.EMAIL_FUNCTION_ENDPOINT,
                httpMethod: "POST",
                headers: { "Content-Type": "application/json" },
                // The client library expects the request body as bytes.
                body: Buffer.from(JSON.stringify({ email })),
              },
              schedule: "0 0 * * *", // once a day
              timeZone: "Africa/Lagos",
            };
    
            return bcrypt.genSalt(10, (err, salt) => {
              bcrypt.hash(password, salt, (err, hash) => {
                document
                  .add({
                    email: email,
                    password: hash,
                  })
                  .then((response) =>
                    client
                      .createJob({ parent: parent, job: job })
                      .then(() => res.status(200).send(response))
                      .catch((e) => console.log(`unable to create job : ${e}`))
                  )
                  .catch((e) =>
                    res.status(501).send(`error inserting data : ${e}`)
                  );
              });
            });
          }
    
          default:
            return res.status(422).send(`${type} is not a valid function action`);
        }
      });
    };
    

    From the snippet above, we can see the following:

    • A connection to the Cloud Scheduler is made from the CloudSchedulerClient constructor, using the Application Default Credentials (ADC).
    • We create an object consisting of the following details which make up the cron job to be created:
      • uri
        The endpoint of our email cloud function which a request would be made to.
      • body
        This is the data containing the email address of the user to be included when the request is made.
      • schedule
        The unix cron format representing the time when this cron job is to be performed.
    • After the promise from inserting the user’s data document is resolved, we create the cron job by calling the createJob method and passing in the job object and the parent.
    • The function’s execution is ended with a 200 status code after the promise from the createJob operation has been resolved.
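    When creating the job through the Node.js client library, two details are easy to get wrong: the httpTarget body must be passed as bytes (a Buffer), and the time-zone field is spelled timeZone. A minimal sketch of building such a job object, where the endpoint URL is a placeholder and the daily schedule is one reasonable choice for a one-day interval:

    ```javascript
    // Builds a daily Cloud Scheduler job payload targeting the email function.
    // The endpoint URL passed in is a placeholder.
    function buildEmailJob(email, endpoint) {
      return {
        httpTarget: {
          uri: endpoint,
          httpMethod: "POST",
          headers: { "Content-Type": "application/json" },
          // The client library expects the body as bytes, not a plain object.
          body: Buffer.from(JSON.stringify({ email })),
        },
        schedule: "0 0 * * *", // unix cron format: once a day, at midnight
        timeZone: "Africa/Lagos",
      };
    }
    ```

    The object built this way is what gets handed to createJob alongside the parent path.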

    After the job is created, we would see it listed on the scheduler page.

    List of all scheduled cron jobs, including the last created job.

    From the image above, we can see the time scheduled for this job to be executed. We can either run this job manually or wait for it to be executed at the scheduled time.

    Conclusion

    In this article, we have taken a good look at serverless applications and the benefits of using them. We have also looked extensively at how developers can manage serverless applications on Google Cloud using Cloud Functions, so you now know how Google Cloud supports the use of serverless applications.

    In the years to come, we will certainly see a large number of developers adopt serverless applications when building software. If you are using Cloud Functions in a production environment, it is recommended that you read this article from a Google Cloud advocate on “6 Strategies For Scaling Your Serverless Applications”.

    The source code of the cloud functions is available in this Github repository, and the frontend application in this Github repository. The frontend application has been deployed using Netlify and can be tested live here.


    What Is Serverless? — Smashing Magazine

    08/11/2020

    We’re talking about Serverless architectures. What does that mean, and how does it differ from how we might build sites currently? Drew McLellan talks to Chris Coyier to find out.

    Today, we’re talking about Serverless architectures. What does that mean, and how does it differ from how we might build sites currently? I spoke to Chris Coyier to find out.

    Transcript

    Drew McLellan: He’s a web designer and developer who you may know from CSS-Tricks, a website he started more than 10 years ago and that remains a fantastic learning resource for those building websites. He’s the co-founder of CodePen, the browser-based coding playground and community used by front-enders all around the world to share what they make and find inspiration from those they follow. Alongside Dave Rupert, he’s the co-host of ShopTalk Show, a podcast all about making websites. So we know he knows a lot about web development, but did you know he once won a hot dog eating competition using only his charm? My smashing friends, please welcome Chris Coyier. Hello Chris, how are you?

    Chris Coyier: Hey, I’m smashing.

    Drew: I wanted to talk to you today not about CodePen, and I don’t necessarily want to talk to you about CSS-Tricks, which is one of those amazing resources that I’m sure everyone knows appears right at the top of Google Search results when looking for answers about any web dev question. Up pops your face and there’s a useful blog post written by you or one of your guest contributors.

    Chris: Oh, I used to actually do that. There was a… I don’t know, it probably was during the time of when Google had that weird social network. What was that? Google Plus?

    Drew: Oh, Plus, yeah.

    Chris: Yeah, where they would associate a website with a Plus account, and so my Plus account had an avatar, and the avatar was me, so it would show up in search results. I think those days are gone. I think if you…

    Drew: I think so, yeah-

    Chris: Yeah.

    Drew: But I kind of wanted to talk to you about something that has been a little bit more of a sort of side interest of yours, and that’s this concept of serverless architectures.

    Chris: Mm (affirmative).

    Drew: This is something you’ve been learning sort of more about for a little while. Is that right?

    Chris: Yeah, yeah. I’m just a fan. It seems like a natural fit to the evolution of front-end development, which is where I feel like I have, at least, some expertise. I consider myself much more of a… much more useful on the front-end than the back-end, not that I… I do it all these days. I’ve been around long enough that I’m not afraid of looking at a little Ruby code, that’s for sure. But I prefer the front-end. I’ve studied it more. I’ve participated in projects more at that level, and then along comes this little kind of a new paradigm that says, “You can use your JavaScript skills on the server,” and it’s interesting. You know? That’s how I think of it. There’s a lot more to it than that, but that’s why I care, is because I feel it’s like front-end developers have dug so deep into JavaScript. And now we can use that same skill set elsewhere. Mm, pretty cool.

    Drew: Seems like a whole new world has opened up, whereas if you were just a front-end coder… I say, just a front-end coder, I shouldn’t. If you’re a front-end coder, and you’re used to working with a colleague or a friend to help you with the back-end implementation, suddenly that’s opened up. And it’s something that you can manage more of the whole stack yourself.

    Chris: Yeah, yeah. That’s it.

    Drew: Addressing the elephant in the room, right at the top. We’re talking about serverless, and obviously, naming things is hard. We all know that. Serverless architecture doesn’t mean there are no servers, does it?

    Chris: I think it’s mandatory, like if this is the first podcast you’re hearing of it, or in the first… you’re only hearing the word “serverless” in the first dozen times you ever heard it, it’s mandatory that you have a visceral reaction and have this kind of, “Oh, but there are still servers.” That’s okay. If that’s happening to you right now, just know that, that’s a required step in this. It’s just like anything else in life. There’s stages to understanding. The first time you hear something, you’re required to kind of reject it a little bit, and then only after a dozen times or so, or after it’s proven its worth a little bit to you, do you get to enter the further stages of understanding here. But the word has won, so if you’re still fighting against the word “serverless”, I hate to tell you, that the train has left the station there. The word is already successful. You’re not going to win this one. So, sorry.

    Chris: But I do think it’s interesting that… it’s starting to be like, maybe there actually aren’t servers involved sometimes. I would think one of the things that locked serverless in as a concept was AWS Lambda. They were kind of the first on the scene. A lambda is like a function that you give to AWS and it puts it in the magical sky and then… it has a URL, and you can hit it and it will run that function and return something if you want it to. You know? That’s just HTTP or whatever. That’s how it works, which… the first time you hear that, you’re like, “Why? I don’t care.” But then, there’s some obvious things to it. It could know my API keys that nobody else has access to. That’s why you run back-end to begin with, is that it knows secret stuff that doesn’t have to be in the JavaScript on the client side. So if it needs to talk to a database, it can do that. It can do that securely without having to expose API keys elsewhere. Or even where that data is or how it gets it, it’s…

    Chris: So that’s pretty cool. I can write a function that talks to a database, get some data, returns that. Cool. So, Lambda is that, but AWS works. You have to pick a region. You’re like, “I don’t know. Where it should be, Virginia? Oregon? Should I pick the Australia one? I don’t know.” They have 20, 30. I don’t even know how many they have these days, but even lambdas had regions. They, I think, these days have Lambda@Edge, which means it’s all of the regions, which is kind of cool. But they were first, and now everybody’s got something like Lambda. All the cloud services. They want some kind of service in this world. One of them is CloudFlare. CloudFlare has workers. They have way more locations than AWS has, but they executed it kind of at a different time too… the way a CloudFlare worker… it’s similar to a lambda in that you can run Node. You can run JavaScript. You can run a number of other languages too, but… I think of this stuff largely, the most interesting language is JavaScript, just because of the prevalence of it.

    Chris: It happens just at the CDN level, which I guess is a server, but I tend to not think of CDNs as a server. Not as obviously as something else. It’s starting to feel even more serverless-y lately. Is a CDN a server? I mean, I guess it’s a computer somewhere, but it feels like even less server-y.

    Drew: It feels like, yes, a CDN may be a server, but it’s the most sort of minimal version of a server. It’s like a thin server, if you like.

    Chris: Yeah. Sure.

    Drew: All right. I’ve heard it said… I can’t remember the source to credit, unfortunately, but I’ve heard serverless described as being “like using a ride-sharing service like Uber or Lyft” or whatever. You can be carless and not own a car, but that doesn’t mean you never use a car.

    Chris: Yeah, it doesn’t mean cars don’t exist. Mm, that’s nice.

    Drew: You just summon one when you need it, but at the same time, you’re not paying the upfront purchase cost of a car. You’re not paying maintenance or fuel or-

    Chris: Right, and the pricing makes sense, too, right? That’s nice. That’s a nice analogy, I think. And then, because it’s at the CDN level too, it just intercepts HTTP requests that are already happening, which means you don’t ask it… you don’t send a request to it and it sends a request back. It’s just happening during the request naturally, which also makes it feel less server-y. I don’t know, it’s interesting. It’s interesting for sure. So that’s a big deal, though, that you brought up the pricing thing. That you only pay for what you use. That’s significant too, because… let’s say, you’re a back-end dev, who’s used to spinning up servers their whole life. And they run the costs, “I need this kind of server with this kind of memory and this kind of CPU and these kind of specs. And this is how much it’s going to cost.” Serverless comes along and chops the head off of that pricing.

    Chris: So, even if you’re a back-end dev who just doesn’t like this that much, that they’re just not into it, like your skill set is just what it is over the years, you compare the price and you’re like, “What? I could be paying 1% of what I was paying before?” You are not allowed to not care about that, right? If you’re this back-end dev that’s paying a hundred times more for their service than they need to be paying, you’re just kind of bad at your job then. Sorry to say. This has come along and this has shattered pricing in a lot of ways. You have to care about that. And it’s kind of cool that somebody else is… It’s not like you don’t have to worry about security at all, but it’s not your server. You don’t have… your lambda or cloud function, or your worker, or whatever, isn’t sitting on a server that’s right next to some really sensitive data on your own network. It’s not right next to your database.

    Chris: If somebody writes code that somehow tries to eject itself from the worker or the lambda, or whatever, and try to get access to other things in their way, there’s nothing there to get. So the security’s a big deal too, so again, if that’s your job as the server admin, is to deal with the security of this thing. Running it, running certain things in Lambda, you just get some natural security from it, which is great. So, it’s way cheaper. It’s way more secure. It encourages these small modular architecture, which can be a good idea. It seems to be domino after domino of good ideas here. That’s why it’s notable. You know?

    Drew: Yeah, I mean, traditionally with a server based architecture that we’ve been running for decades on the web, you have a web server that you run yourself. It holds your front-end code, your back-end code, your database and everything. Then you need to maintain that and keep it running and pay the bills, and even if it’s not being used, it’s there clocking up bills. The user would make a request and it would build all that HTML query stuff from the database, send it all down the line to the browser. That process works. It’s how loads of things are built. It’s probably the majority of how the web is built. It’s how things like WordPress work. Is this really a problem that we need to solve? I mean, we’ve talked about costs a little bit. What are the other sort of problems with that, that we’re… that we need to address, and that serverless might help us with?

    Chris: Yeah, the problems with the old school approach. Yeah, I don’t know, maybe there isn’t any. I mean, I’m not saying the whole web needs to change their whole… the whole thing overnight. I don’t know. Maybe it doesn’t really, but I think it opens up doors. It just seems like, when good ideas arrive like this, they just slowly change how the web operates at all. So, if there’s some CMS that is built in some way that expects a database to be there, it means that maybe the hosts of the future will start leveraging this in interesting ways. Maybe it feels to you like it’s still just a traditional server, but the hosts themselves have farmed it out, how they operate, to serverless architectures. So you don’t even really know that that’s happening, but they’ve found a way to slash their costs by hosting the stuff that you need in serverless ways. Maybe you don’t even need to care as a developer, but at a meta level, that’s what’s happening. Maybe. I don’t know.

    Chris: It also doesn’t mean that… Databases are still there. If it turns out that architecturally having a relational database is the correct way to store that data, great. I mention that because this world of Serverless is kind of growing up at the same time that JAMstack is. And JAMstack is this architecture that’s, “You should be serving your website off of static hosts, that run nothing at all except for…” They’re like little CDNs. They’re like, “I can do nothing. I don’t run PHP. I don’t run Ruby. I run nothing. I run on a tiny little web server that’s just designed to serve static files only.”

    Chris: “And then, if you need to do more than that, if you need to pull data from a relational database, then please do it at some other time, not at the server time. You can either do it in a build process ahead of time, and pull that stuff out of the database, pre-build static files and I’ll serve those, or do it at runtime.” Meaning you get this shell of a document, and then it makes a JavaScript request to get some data and prefills it then. So you do it ahead of time or after time, but it doesn’t mean, “Don’t use a relational database.” It just means, “Don’t have the server generate it at the time of the request of the document,” which is a… I don’t know, it’s a little bit of a paradigm shift.

    Chris: It’s not just JAMstack either. We’re also living in the time of JavaScript frameworks. We’re living in a time where it’s starting to be a little more expected that the way that a JavaScript application boots up, is that it mounts some components, and as those components mount, it asks for the data that it needs. And so, it can be kind of a natural fit for something like a React website to be like, “Well, I’ll just hit a serverless function to cough up the data that it needs. It hits some JSON API essentially. I get the JSON data that I need and I construct myself out of that data, and then I render onto the page.” Now, whether that’s good or bad for the web, it’s like, “I don’t know. Too bad. Ship has sailed. That’s how a lot of people are building sites.” It’s just client rendered things. So, serverless and modern JavaScript kind of go hand in hand.

    Drew: I suppose you don’t have to wholesale… be looking at one architecture or another. There’s an area in the middle where parts of an infrastructure might be more traditional and parts could be serverless, I’m guessing?

    Chris: Yeah. Well, they’re trying to tell you that anyway. Anybody that wants to sell you any part of their architecture is like, “You don’t have to buy in all right now. Just do it a little bit.” Because of course, they want you to dip your toe into whatever they’re selling, because once you dip the toe, the chances that you splash yourself into the pool is a lot higher. So, I think that… it’s not a lie, though, necessarily, although I find a little less luck in… I don’t want my stack to be a little bit of everything. I think there’s some technical debt there that I don’t always want to swallow.

    Drew: Mm (affirmative).

    Chris: But it’s possible to do. I think the most quoted one is… let’s say I have a site that has an eCommerce element to it, which means… and let’s say large scale eCommerce, so 10,000 products or something, that this JAMstack architecture hasn’t gotten to the point where that’s always particularly efficient to rebuild that statically. So, the thinking goes, “Then don’t.” Let that part kind of hydrate naturally with… hit serverless functions and get the data that it needs, and do all that. But the rest of the site, which isn’t… there’s not as many pages, there’s not as much data, you could kind of pre-render or whatever. So a little bit of both.

    Drew: Of course, plenty of people are dealing with legacy systems that… some old database thing that was built in the 2000s that they may be able to stick a sort of JSON API layer on top of…

    Chris: Yeah.

    Drew: … and build something more modern, and perhaps serverless, and then still interact with those legacy systems by sort of gluing it altogether in a weird way.

    Chris: Yeah. I like that though, isn’t it? Aren’t… most websites already exist. How many of us are totally green-fielding websites? Most of us work on some crap that already exists that needs to be dragged into the future for some reason, because I don’t know, developers want to work faster, or you can’t hire anybody in COBOL anymore, or whatever the story is. You know?

    Drew: So terminology wise, we’re talking about JAMstack which is this methodology of running a code pretty much in the browser, serving it from a CDN. So, not having anything dynamic on the server. And then when we talk about serverless, we’re talking about those small bits of functionality that run on their server somewhere else. Is that right? That we were talking about these cloud function kind of-

    Chris: Yeah, I mean, they just happen to be both kind of hot ideas right now. So it’s kind of easy to talk about one and talk about the other. But they don’t necessarily need to be together. You could run a JAMstack site that has nothing to do with serverless anything. You’re just doing it, you just pre-build the site and run it, and you can use serverless without having to care about JAMstack. In fact, CodePen does nothing JAMstack at all. Not that we want to talk about CodePen necessarily, but it’s a Ruby on Rails app. It runs on a whole bunch of AWS EC2 instances and a variety of other architecture to make it happen. But we use serverless stuff whenever we can for whatever we can, because it’s cheap and secure, and just a nice way to work. So, no JAMstack in use at all but serverless all over the place.

    Drew: That’s quite interesting. What sort of tasks are you putting serverless to on CodePen?

    Chris: Well, there’s a whole bunch of things. One of them is, I think, hopefully fairly obvious is, I need… the point of CodePen is that you write each HTML, CSS and JavaScript in the browser and it renders it in front of you, right? But you can pick pre-processor languages as well. Let’s say you like Sass. You turn Sass on in the CSS, and you write Sass. Well, something has to process the Sass. These days, Sass is written in Dart or something.

    Chris: Theoretically, you could do that in the client. But these libraries that do pre-processing are pretty big. I don’t think I want to ship the entire Sass library to you, just to run that thing. I don’t want to… it’s just not, that’s not the right architecture for this necessarily. Maybe it is down the road, I mean, we could talk about offline crap, yada, yada, Web Workers. There’s a million architectural things we could do. But here’s how it does work now, is there’s a lambda. It processes Sass. It has one tiny, tiny, tiny, little job.

    Chris: You send it this blob of Sass and it sends you stuff back, which is the processed CSS, maybe a site map, whatever. It has one tiny little job and we probably pay for that lambda, like four cents or something. Because lambdas are just incredibly cheap and you can hammer it too. You don’t have to worry about scale. You just hit that thing as much as you want and your bill will be astonishingly cheap. There is moments where serverless starts to cross that line of being too expensive. I don’t know what that is, I’m not that master of stuff like that. But generally, any serverless stuff we do, we basically… all nearly count as free, because it’s that cheap. But there’s one for Sass. There’s one for Less. There’s one for Babbel. There’s one for TypeScript. There’s one for… All those are individual lambdas that we run. Here’s some code, give it to the lambda, it comes back, and we do whatever we’re going to do with it. But we use it for a lot more than that, even recently.

    Chris: Here’s an example. Every single Pen on CodePen has a screenshot. That’s kind of cool, right? So, the people make a thing and then we need a PNG or a JPEG, or something of it, so that we can… that way when you tweet it, you get the little preview of it. If you share it in Slack, you get the little preview of it. We use it on the website itself to render… instead of an iframe, if we could detect that the Pen isn’t animated, because an image is much lighter than an iframe, so why not use the image? It’s not animated anyway. Just performance gains like that. So each of those screenshots has a URL to it, obviously. And we’ve architected it so that that URL is actually a serverless function. It’s a worker. And so, if that URL gets hit, we can really quickly check if we’ve already taken that screenshot or not.

    Chris: That’s actually enabled by CloudFlare Workers, because CloudFlare Workers are not just a serverless function, but they have a data store too. They have this thing called key-value store, so the ID of that, we can just check really quick and it’ll be, “True or false, do you have it or not?” If it’s got it, it serves it. And it serves it over CloudFlare, which is super fast to begin with. And then gives you all this ability too. Because it’s an image CDN, you can say, “Well, serve it in the optimal format. Serve it as these dimensions.” I don’t have to make the image in those dimensions. You just put the dimensions in the URL and it comes back as that size, magically. So that’s really nice. If it doesn’t have it, it asks another serverless function to make it really quick. So it’ll make it and then it’ll put it in a bucket somewhere… because you have to have an origin for the image, right? You have to actually host it somewhere usually. So we put it in an S3 bucket real quick and then serve it.

    Chris: So there’s no queuing server, there’s no nothing. It’s like serverless functions manage the creation, storage and serving of these images. And there’s like 50 million or 80 million of them or something. It’s a lot, so it handles that as scale pretty nicely. We just don’t even touch it. It just happens. It all happens super fast. Super nice.

    Drew: I guess it… well, a serverless function is ideally going to suit a task that needs very little knowledge of state of things. I mean, you mentioned CloudFlare’s ability to store key-value pairs to see if you’ve got something cached already or not.

    Chris: Yeah. That’s what they’re trying to solve, though, with those. Those key-value pairs, is that… I think that traditionally was true. They’re like, “Avoid state in the thing,” because you just can’t count on it. And CloudFlare Workers are being like, “Yeah, actually, you can deal with state, to some degree.” It’s not as fancy as a… I don’t know, it’s key values, so it’s a key in a value. It’s not like a nested, relational fancy thing. So there’s probably some limits to that. But this is baby days for this. I think that stuff’s going to evolve to be more powerful, so you do have some ability to do some state-like stuff.

    Drew: And sometimes the limitation, that sort of limited ability to maintain state, or the fact that you have no… you want to maintain no state at all, kind of pushes you into an architecture that gives you this sort of… Well, when we talk about the software philosophy of “Small Pieces Loosely Joined”, don’t we?

    Chris: Mm (affirmative).

    Drew: Where each little component does one thing and does it well. And doesn’t really know about the rest of the ecosystem around it. And it seems that really applies to this concept of serverless functions. Do you agree?

    Chris: Yeah. I think you could have a philosophical debate whether that’s a good idea or not. You know? I think some people like the monolith, as it were. I think there’s possible… there’s ways to overdo this and to make too many small parts that are too hard to test altogether. It’s nice to have a test that’s like, “Oh, I wonder if my Sass function is working. Well, let’s just write a little test for it and make sure that it is.” But let’s say, what matters to the user is some string of seven of those. How do you test all seven of them together? I think that story gets a little more complicated. I don’t know how to speak super intelligently to all that stuff, but I know that it’s not necessarily that, if you roll with all serverless functions that’s automatically a better architecture than any other architecture. I like it. It reasons out to me nicely, but I don’t know that it’s the end-all be-all of all architectures. You know?

    Drew: To me, it feels extremely web-like, in that… this is exactly how HTML works, isn’t it? You deliver some HTML and the browser will then go and fetch your images and fetch your JavaScript and fetch your CSS. It seems like it’s an expansion of that –

    Chris: It’s nice.

    Drew: … sort of idea. But, one thing we know about the web, is it’s designed to be resilient because network’s fragile.

    Chris: Mm (affirmative).

    Drew: How robust is the sort of serverless approach? What happens if something… if one of those small pieces goes away?

    Chris: That would be very bad. You know? It would be a disaster. Your site would go down just like any other server, if it happens to go down, I guess.

    Drew: Are there ways to mitigate that, that are particularly –

    Chris: I don’t know.

    Drew: … suited to this sort of approach, that you’ve come across?

    Chris: Maybe. I mean, like I said, a really super fancy robust thing might be like… let’s say you visit CodePen and let’s say that there’s a JavaScript implementation of Sass and we noticed that you’re on a fairly fast network and that you’re idle right now. Maybe we’ll go grab that JavaScript and we’ll throw it in a service worker. Then, if we detect that the lambda fails, or something, or that you have this thing installed already, then we’ll hit the service worker instead of the lambda, and service workers are able to work offline. So, that’s kind of nice too. That’s interesting. I mean, they are the same language-ish. Service workers are JavaScript and a lot of Cloud functions are JavaScript, so there’s some… I think that’s a possibility, although that… it’s just, that’s some serious technical that… It just scares me to have this chunk of JavaScript that you’ve delivered to how many thousands of users, that you don’t necessarily know what they have, and what version of it they have. Eww, but that’s just my own scarediness. I’m sure some people have done a good job with that type of thing.

    Chris: I actually don’t know. Maybe you know some strategies that I don’t, on resiliency of serverless.

    Drew: I guess there’s a failure mode, a style of failure, that could happen with serverless functions, where you run a function once and it fails, and you can run it a second time immediately afterwards and it would succeed, because it might hit a completely different server. Or whatever the problem was with that run may not exist on a second request. The issues of an entire host being down is one thing, but maybe there are… you have individual problems with the machine. You have a particular server where its memory has gone bad, and it’s throwing a load of errors, and the first time you hit it, it’s going to fail. Second time, that problem might have been routed around.

    Chris: Companies that tend to offer this technology, you have to trust them, but they also happen to be the type of companies that… this is their pride. This is the reason why people use them is because they’re reliable. I’m sure people could point to some AWS outages of the past, but they tend to be a little rare, and not super common. If you were hosting your own crap, I bet they got you beat from an SLA percentage kind of level. You know? So it’s not like, “Don’t build in a resilient way,” but generally the type of companies that offer these things are pretty damn reliable. The chances of you going down because you screwed up that function are a lot higher than because their architecture is failing.

    Drew: I suppose, I mean, just like anything where you’re using an API or something that can fail, is just making sure you structure your code to cope with that failure mode, and to know what happens next, rather than just throwing up an error to the user, or just dying, or what have you. It’s being aware of that and asking the user to try again. Or trying again yourself, or something.

    Chris: Yeah, I like that idea of trying more than once, rather than just being, “Oh no. Fail. Abort.” “I don’t know, why don’t you try again there, buddy?”
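The “try more than once” idea Drew and Chris land on can be sketched as a small wrapper around any flaky call. The function name, attempt count, and delay below are all made up for illustration:

```javascript
// Minimal retry helper: re-run a flaky async call a few times before giving up.
async function withRetry(fn, attempts = 3, delayMs = 200) {
  let lastError;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn(); // success: hand the result straight back
    } catch (err) {
      lastError = err;
      // brief pause so a transient failure (bad host, cold start) can clear
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
  throw lastError; // every attempt failed: surface the last error
}
```

A front-end could wrap its fetch to a function URL in something like `withRetry(() => fetch(url))`, so a one-off failure on one machine gets a second chance, possibly on another.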

    Drew: So I mean, when it comes to testing and development of serverless functions, sort of cloud functions, is that something that can be done locally? Does it have to be done in the cloud? Are there ways to manage that?

    Chris: I think there are some ways. I don’t know if the story is as awesome. It’s still a relatively new concept, so I think that that gets better and better. But from what I know, for one thing, you’re writing a fairly normal Node function. Assuming you’re using JavaScript to do this, and I know that on Lambda specifically, they support all kinds of stuff. You can write a fricking PHP Cloud Function. You can write a Ruby Cloud Function. So, I know I’m specifically talking about JavaScript, because I have a feeling that most of these things are JavaScript. Even no matter what language it is, I mean, you can go to your command line locally and execute the thing. Some of that testing is… you just test it like you would any other code. You just call the function locally and see if it works.

    Chris: It’s a little different story when you’re talking about an HTTP request to it, that’s the thing that you’re trying to test. Does it respond to the request properly? And does it return the stuff properly? I don’t know. The network might get involved there. So you might want to write tests at that level. That’s fine. I don’t know. What is the normal story there? You spin up some kind of local server or something that serves it. Use Postman, I don’t know. But there’s… Frameworks try to help too. I know that the serverless “.com”, which is just terribly confusing, but there’s literally a company called Serverless and they make a framework for writing the serverless functions that helps you deploy them.
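Chris’s point that you can “just call the function locally” works because a cloud function is, at heart, a plain exported function. Here’s a hypothetical lambda-style handler; the event shape loosely follows the common API Gateway convention, but all the names are illustrative:

```javascript
// A plain Node function in the lambda-handler style: takes an event, returns a response.
async function handler(event) {
  const params = event.queryStringParameters || {};
  const name = params.name || "world";
  return {
    statusCode: 200,
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ message: `Hello, ${name}!` }),
  };
}
```

Locally you can call `handler({ queryStringParameters: { name: "Drew" } })` directly and inspect the result; only the HTTP layer around it needs a local server or a tool like Postman.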

    Chris: So if you, like, npm install serverless, you get their framework. And it’s widely regarded as very good, because it’s just very helpful, but they don’t have their own cloud or whatever. You write these and then it helps you get them to a real lambda. Or it might work with multiple cloud providers. I don’t even know these days, but their purpose of existing is to make the deployment story easier. I don’t know what… AWS is not renowned for their simplicity. You know? There’s all this world of tooling to help you use AWS and they’re one of them.

    Chris: They have some kind of paid product. I don’t even know what it is exactly. I think one of the things they do is… the purpose of using them is for testing, is to have a dev environment that’s for testing your serverless function.

    Drew: Yeah, because I guess, that is quite a big part of the workflow, isn’t it? If you’ve written your JavaScript function, you’ve tested it locally, you know it’s going to do the job. How do you actually pick which provider it’s going to go into and how do you get it onto that service? Now, I mean, that’s a minefield, isn’t it?

    Chris: Yeah. I mean, if you want to use no tooling at all, I think they have a really… like AWS, specifically, has a really rudimentary GUI for the thing. You can paste the code in there and hit save and be like, “Okay, I guess it’s live now.” That’s not the best dev story, but I think you could do it that way. I know CloudFlare workers have this thing called Wrangler that you install locally. You spin it up and it spins up a fake browser on the top and then dev tools below. Then you can visit the URL and it somehow intercepts that and runs your local cloud function against it. Because one of the interesting things about workers is… you know how I described how it… you don’t hit a URL and then it returns stuff. It just automatically runs when you… when it intercepts the URL, like CDN style.

    Chris: So, one of the things it can do is manipulate the HTML on the way through. The worker, it has access to the complete HTML document. They have a jQuery-esque thing that’s like, “Look for this selector. Get the content from it. Replace it with this content. And then continue the request.” So you can mess with code on the way through it. To test that locally, you’re using their little Wrangler tool thing to do that. Also, I think the way we did it was… it’s also a little dangerous. The second you put it live, it’s affecting all your web traffic. It’s kind of a big deal. You don’t want to screw up a worker. You know? You can spin up a dev worker that’s at a fake subdomain, and because it’s CloudFlare, you can… CloudFlare can just make a subdomain anyway. I don’t know. It’s just kind of a nice way to do a… as you’re only affecting sub-domain traffic, not your main traffic yet. But the subdomain’s just a mirror of a production anyway, so that’s kind of a… that’s a testing story there.

    Chris: It brings up an interesting thing, though, to me. It’s like… imagine you have two websites. One of them is… for us it’s like a Ruby on Rails app. Whatever. It’s a thing. But we don’t have a CMS for that. That’s just like… it’s not a CMS, really. I think there’s probably Ruby CMSs, but there’s not any renowned ones. You know? It seems like all the good CMSs are PHP, for some reason. So, you want a quality CMS. Drew, you’ve lived in the CMS market for a long time –

    Drew: Absolutely.

    Chris: … so you know how this goes. Let’s say you want to manage your sites in Perch or whatever, because it’s a good CMS and that’s the proper thing to use to build the kind of pages you want to build. But you don’t want to run them on the same server. Unless you want to manage the pages on one site, but show them on another site. Well, I don’t know, there’s any number of ways to do that. But one JavaScript way could be, “Okay, load the page. There’s an empty div there. Run some JavaScript. Ask the other site for the content of that page and then plunk it out on the new page.” That’s fine, I guess, but now you’re in a client side rendered page. It’s going to be slow. It’s going to have bad SEO, because… Google will see it eventually, but it takes 10 days or something. It’s just a bad story for SEO. It’s not very resilient, because who knows what’s going to happen in the network. It’s not the greatest way to do this kind of “content elsewhere, content on site B, show page of site A”, situation.

    Chris: You could also do it on the server side, though. Let’s say you had… Ruby is capable of making a network request too, but that’s even scarier because then if something fails on the network, the whole page could die or something. It’s like a nervous thing. I don’t love doing that either. But we did this just recently with a worker, in that we… because the worker’s JavaScript, it can make a fetch request. So, it fetches site A, it finds this div on the page, and then it goes and asks site B for the content. Gets the content. Plugs it into that div, and serves the page before it gets anything. So it looks like a server rendered page, but it wasn’t. It all happened at the… on the edge, at the worker level, at the serverless level.

    Chris: So it’s kind of cool. I think you can imagine a fetch request on the browser probably takes, I don’t know, a second and a half or something. It probably takes a minute to do it. But because these are… site B is hosted on some nice hosting and Cloudflare has some… who knows what kind of super computers they use to do it. They do. Those are just two servers talking to each other, and that fetch request happens just so super duper, duper fast. It’s not limited to the internet connection speed of the user, so that little request takes like two milliseconds to get that data. So it’s kind of this cool way to stitch together a site from multiple sources and have it feel like, and behave like, a server rendered page. I think there’s a cool future to that.
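The edge-stitching Chris describes reduces to a small sketch. Cloudflare Workers expose an HTMLRewriter API for the “find this selector, swap its content” step; the version below fakes that with a plain string replace, and the two fetchers are passed in as functions, so the idea stands on its own without a network. Every name here is hypothetical:

```javascript
// Edge stitching sketch: fetch the host page (site A), fetch the CMS-rendered
// fragment (site B), and combine them before anything reaches the browser.
async function stitchPage(fetchSiteA, fetchSiteB) {
  const pageA = await fetchSiteA();   // host page, containing a placeholder div
  const content = await fetchSiteB(); // fragment from the CMS site
  // Swap the placeholder for the real content, server-to-server, at the edge.
  return pageA.replace(
    '<div id="cms-content"></div>',
    `<div id="cms-content">${content}</div>`
  );
}
```

Because both fetches happen between servers rather than over the user’s connection, the stitched page arrives looking like it was server-rendered all along.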

    Drew: Are there any sort of conventions that are sort of springing up around serverless stuff. I’m sort of thinking about how to architect things. Say I’ve got something where I want to do two sort of requests to different APIs. I want to take in a postal address and geocode it against one, and then take those coordinates and send that to a florist who’s going to flower bomb my front yard or something. How would you build that? Would you do two separate things? Or would you turn that into one function and just make the request once from the browser?

    Chris: Mm (affirmative). That’s a fascinating question. I’d probably have an architect function or something. One function would be the one that’s in charge of orchestrating the rest of them. It doesn’t have to be, your website is the hub and it only communicates to this array of single sources. Serverless functions can talk to other serverless functions. So I think that’s somewhat common to have kind of an orchestrator function that makes the different calls and stitches them together, and returns them as one. I think that is probably smart and faster, because you want servers talking to servers, not the client talking to a whole bunch of servers. If it can make one request and get everything that it needs, I think that’s probably generally a good idea-
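Chris’s orchestrator idea, sketched with Drew’s geocode-then-florist example. The two API calls are passed in as functions so the chaining is clear; in real code they’d be fetches to the actual services, and every name here is made up:

```javascript
// Orchestrator function: one server-side entry point that chains two APIs
// and returns a single combined payload, so the browser makes only one request.
async function orchestrate(address, geocode, orderFlowers) {
  const coords = await geocode(address);    // API 1: postal address -> coordinates
  const order = await orderFlowers(coords); // API 2: coordinates -> flower-bomb order
  return { coords, order };                 // stitched together as one response
}
```

Server-to-server latency between the orchestrator and the two APIs is typically far lower than two round trips from the client, which is the speed argument for this shape.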

    Drew: Yeah, that sounds smart. Yep.

    Chris: But I think that’s the ultimate thing. You get a bunch of server nerds talking, they’ll talk about the different approaches to that exact idea in 10 different ways.

    Drew: Yeah. No, that sounds pretty smart. I mean, you mentioned as well that this approach is ideal if you’re using APIs where you’ve got secret information. You’ve got API keys or something that you don’t want to live in the client. Because I don’t know, maybe this florist API charges you $100 every time you flower bomb someone.

    Chris: Easily.

    Drew: You can basically use those functions to almost proxy the request and add in the secret information as it goes, and keep it secret. That’s a viable way to work?

    Chris: Yeah, yeah. I think so. I mean, secrets are, I don’t know, they’re interesting. They’re a form of buy in I think to whatever provider you go with, because… I think largely because of source control. It’s kind of like, you could just put your API key right in the serverless function, because it’s just going to a server, right? You don’t even have to abstract it, really. The client will never see that code that executes, but in order for it to get there, there’s probably a source control along the way. It’s probably like you commit to master, and then master… then some kind of deployment happens that makes that thing go to the serverless function. Then you can’t put your API key in there, because then it’s in the repo, and you don’t put your API keys in repos. That’s good advice. Now there’s stuff. We’ve just done… at CodePen recently, we started using this git-crypt thing, which is an interesting way to put keys safely into your repos, because it’s encrypted by the time anybody’s looking at that file.

    Chris: But only locally they’re decrypted, so they’re useful. So it’s just kind of an interesting idea. I don’t know if that helps in this case, but usually, cloud providers of these things have a web interface that’s, “Put your API keys here, and we’ll make them available at runtime of that function.” Then it kind of locks… it doesn’t lock you in forever but it kind of is… it’s not as easy to move, because all your keys are… you put in some input field and some admin interface somewhere.
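The “proxy the request and add the secret as it goes” pattern Drew asks about looks roughly like this. The environment variable name and florist API are hypothetical, and the fetch function is injectable only so the sketch is easy to exercise without a network:

```javascript
// Server-side proxy: the browser calls this function; the function attaches the
// secret key (injected via the provider's environment, never shipped to the client).
async function proxyFloristRequest(payload, doFetch = globalThis.fetch) {
  const key = process.env.FLORIST_API_KEY; // set in the provider's admin UI
  const res = await doFetch("https://api.example-florist.com/flower-bomb", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${key}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify(payload),
  });
  return res.json();
}
```

The client only ever sees the proxy’s URL; the key lives in the function’s runtime environment, which is exactly the lock-in trade-off Chris mentions.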

    Drew: Yeah, I think that’s the way that Netlify manage it.

    Chris: They all do, you know?

    Drew: Yeah. You have the secret environment variables that you can set from the web interface. That seems to work quite nicely.

    Chris: Yeah, right. But then you got to leave… I don’t know, it’s not that big of a deal. I’m not saying they’re doing anything nefarious or anything. How do you deal with those secrets? Well, it’s a hard problem. So they kind of booted it to, I don’t know, “Just put them in this input field and we’ll take care of it for you, don’t worry about it.”

    Drew: Is there anything that you’ve seen that stands out as an obvious case for things that you can do with serverless, that you just couldn’t do with a traditional kind of serverfull approach? Or is it just taking that code and sort of almost deploying it in a different way?

    Chris: It’s probably mostly that. I don’t know that it unlocks any possibility that you just absolutely couldn’t run it any other way. Yeah, I think that’s a fair answer, but it does kind of commoditize it in an interesting way. Like, if somebody writes a really nice serverless function… I don’t know that this exists quite yet, but there could kind of be a marketplace, almost, for these functions. Like, I want a really good serverless function that can take a screenshot. That could be an open source project that has lots of eyeballs on it, that does a tremendously good job of doing it and solves all these weird edge cases. That’s the one I want to use. I think that’s kind of cool. You know? That you can kind of benefit from other people’s experience in that way. I think that will happen more and more.

    Drew: I guess it’s the benefit that we talked about, right at the top, of enabling people who write JavaScript and may have written JavaScript only for the front-end, to expand and use those skills on the back-end as well.

    Chris: Yeah, yeah. I think so, I think that’s… because there’s moments like… you don’t have to be tremendously skilled to know what’s appropriate and what’s not for a website. Like, I did a little tutorial the other week, where there was this… Glitch uses these… when you save a glitch, they give you a slug for your thing that you built, that’s like, “Whiskey, tango, foxtrot. 1,000.” It’s like a clever little thing. The chances of it being unique are super high, because I think they even append a number to it or something too. But they end up being these fun little things. They open source their library that has all those words in it, but it’s like hundreds of thousands of words. The file is huge. You know? It’s megabytes large of just a dictionary of words. You probably learn in your first year of development, “Don’t ship a JavaScript file that’s megabytes of a dictionary.” That’s not a good thing to ship. You know? But Node doesn’t care. You can ship hundreds of them. It’s irrelevant to the speed on a server.

    Drew: Yeah.

    Chris: It doesn’t matter on a server. So, I could be like, “Hmm, well, I’ll just do it in Node then.” I’ll have a statement that says, “Words equal require words,” or whatever, and a note at the top, “Have it randomize a number. Pull it out of the array and return it.” So that serverless function is eight lines of code with a package.json that pulls in this open source library. And then my front-end code, there’s a URL to the serverless function. It hits that URL. The URL returns one word or a group of words or whatever. You build your own little API for it. And now, I have a really kind of nice, efficient thing. What was nice about that is, it’s so simple. I’m not worried about the security of it. I don’t… you know?
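Chris’s eight-line function might look something like this. The real version pulls in Glitch’s open-source word list via package.json; a tiny inline array stands in for that multi-megabyte dictionary here, so every name and word is illustrative:

```javascript
// Stand-in for the huge open-source dictionary the real function would require().
const words = ["whiskey", "tango", "foxtrot", "lima", "echo", "bravo"];

// Pick a few random words, append a number for uniqueness, and return a slug.
function randomSlug(count = 3) {
  const picks = [];
  for (let i = 0; i < count; i++) {
    picks.push(words[Math.floor(Math.random() * words.length)]);
  }
  return picks.join("-") + "-" + Math.floor(Math.random() * 1000);
}
```

Wrapped in a handler, the front-end just hits the function’s URL and gets a slug back, while the heavy dictionary never leaves the server.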

    Chris: It’s just… a very average or beginner JavaScript developer, I think, can pull that off, which is cool. That’s an enabling thing that they didn’t have before. Before, they were like, “Well, here’s a 2MB array of words.” “Oh, I can’t ship that to the client.” “Oh, you’ll just shut down then.” You might hit this wall that’s like, “I just can’t do that part then. I need to ask somebody else to help me with that or just not do it or pick more boring slugs or some…” It’s just, you have to go some other way that is a wall to you, because you couldn’t do it. And now, you’re, “Oh, well, I’ll just…” Instead of having that in my script slash, or in my source slash scripts folder, I’ll put it in my functions folder instead.

    Chris: You kind of like moved the script from one folder to the other. And that one happens to get deployed as a serverless function instead. How cool is that? You know? You’re using the same exact skill set, almost. There’s still some rough edges to it, but it’s pretty close.

    Drew: It’s super cool. You’ve put together a sort of little micro site all about these ideas, haven’t you?

    Chris: Yeah. I was a little early to the game. I was just working on it today, though, because… it gets pull requests. The idea… well, it’s at serverless.css-tricks.com and… there’s a dash in CSS-Tricks, by the way. So it’s a subdomain of CSS-Tricks, and I built it serverlessly too, so this is… CSS-Tricks is like a WordPress site, but this is a static site generator site. All the content of it is in the GitHub repo, which is open-source. So if you want to change the content of the site, you can just submit a pull request, which is nice because there’s been a hundred or so of those over time. But I built all the original content.

    Drew: It’s a super useful place, because it lists… If you’re thinking, “Right, I want to get started with serverless functions,” it lists all the providers who you could try it and…

    Chris: That’s all it is, pretty much, is lists of technology. Yeah.

    Drew: Which is great, because otherwise, you’re just Googling for whatever and you don’t know what you’re finding. Yeah, it’s lists of API providers that help you do these sorts of things.

    Chris: Forms is one example of that, because… so the minute that you choose to… let’s say, you’re going to go JAMstack, which I know that’s not necessarily the point of this, but you see how hand in hand they are. All of a sudden, you don’t have a PHP file or whatever to process that form with. How do you do forms on a JAMstack site? Well, there’s any number of ways to do it. Everybody and their sister wants to help you solve that problem, apparently. Netlify, I think, was the inventor of the word JAMstack, so they try to help you naturally, but you don’t have to use them.

    Chris: In fact, I was so surprised putting this site together. Let’s see. There’s six, nine, twelve, fifteen, eighteen, twenty one, twenty two services out there, that want to help you serverlessly process your forms on this site right now. If you want to be the 23rd, you’re welcome to it, but you have some competition out there. So the idea behind this is that you write a form in HTML, like literally a form element. And then the action attribute of the form, it can’t point anywhere internally, because there’s nothing to point to. You can’t process, so it points externally. It points to whatever they want you to point it to. They’ll process the form and then they tend to do things that you’d expect them to, like send an email notification. Or send a Slack thing. Or then send it to Zapier and Zapier will send it somewhere else. They all have slightly different feature sets and pricing and things, but they’re all trying to solve that problem for you, like, “You don’t want to process your own forms? No problem. We’ll process it for you.”

    Drew: Yeah, it’s a super useful resource. I’d really recommend everyone check it out. It’s serverless.css-tricks.com. So, I’ve been learning all about serverless. What have you been learning about lately, Chris?

    Chris: Well, I’m still very much in this world too and learning about serverless stuff. I had an idea to… I used to play this online role playing game ages ago. I just recently discovered that it’s still alive. It’s a text based medieval fantasy kind of game. I played it when AOL was a thing, because AOL wanted to have these games that you had to be logged on to play it, because they wanted you to spend hours and hours on AOL, so they could send you these huge bills, which was, I’m sure, why they did so well at some point.

    Drew: So billing by the second. Yeah.

    Chris: Yeah. So games was big for them. If they could get you playing games with other people on there. So this game kind of… it didn’t debut there, but it moved to AOL, because I’m sure they got a juicy deal for it, but it was so… I mean, it’s just, couldn’t possibly be nerdier. You’re a dwarven mage and you get rune staff from your leather sheath. And you type commands into it like a terminal. Then the game responds to you. I played that game for a very long time. I was very into it. I got into the community of it and the spirit of it. It was kind of a… it was like I was just alone by myself at my computer, but yet I look back on that time in my life, and be like, “That was a wonderful time in my life.” I was really… I just liked the people and the game and all that. But then I grew up and stopped playing it, because life happens to you.

    Chris: I only found out recently, because somebody started doing a podcast about it again… I don’t know how I came across it, but I just did. I was like, “This game is alive and well in today’s world, are you kidding me? This text based thing.” And I was more than happy to reactivate and get my old characters back and play it. But only to find out that the clients that they have you download for this game, haven’t evolved at all. They are awful. They almost assume that you’re using Windows. There’s just these terribly cheesy poorly rendering… and it’s text based, you think it’d at least have nice typography. No. So I’m like, “I could be involved. I could write a client for this game. Put beautiful typography in it.” Just modernize the thing, and I think the players of the game would appreciate it, but it felt overwhelming to me. “How can I do it?” But I find some open source projects. One of them is like… you can play the game through an actual terminal window, and it uses some open source libs to kind of make a GUI out of a terminal window.

    Drew: Really?

    Chris: I don’t know. So that was kind of cool. I was like, “If they wrote that, there must be code in there to how to connect to the game and get it all going and stuff. So at least I have some starter code.” I was trying to go along the app, “Maybe I’ll do it in Flutter or something,” so the final product app would work on mobile phones and, “I could really modernize this thing.” But then I got overwhelmed. I was like, “Ah, this is too big a… I can’t. I’m busy.” But I found another person who had the same idea and they were way further along with it, so I could just contribute on a design level. And it’s been really fun to work on, but I’ve been learning a lot too, because it’s rare for me to jump into a project that’s somebody else’s baby, and I’m just contributing to a little bit, and that has totally different technology choices than I would have ever picked.

    Chris: It’s an Electron app. They picked that, which is also kind of a cool way to go too, because it’s my web skills… so I’m not learning anything too weird, and it’s cross-platform, which is great. So, I’ve been learning a lot about Electron. I think it’s fun.

    Drew: That’s fascinating. It’s always amazing how little side projects and things that we do for fun, end up being the place where we sometimes learn the most. And learn skills that can then feed back into our sort of daily work.

    Chris: That’s the only way I learn things. I’m dragged into something that… I was like, “They’re not…” It’s rendered with a JavaScript library called Mithril, which is… I don’t know if you’ve ever heard of it, but it’s weird. It’s not… it’s almost like writing React without JSX. You have to “create element” and do all these… but it’s supposed to benchmark way better than it… And it actually kind of matters because in this text based game, the text is just flying. There’s a lot of data manipulation, which is like… you’d think this text based game would be so easy for a browser window to run, but it’s actually kind of not. There’s so much data manipulation happening, that you really have to be really… we have to be conscientious about the speed of the rendering. You know?

    Drew: That’s fascinating-

    Chris: Pretty cool.

    Drew: Yeah. If you, dear listener, would like to hear more from Chris, you can find him on Twitter, where he’s @chriscoyier. Of course, CSS-Tricks can be found at css-tricks.com and CodePen at codepen.io. But most of all, I recommend that you subscribe to the ShopTalk Show podcast if you haven’t already done so, at shoptalkshow.com. Thanks for joining us today, Chris. Do you have any parting words?

    Chris: Smashingpodcast.com. I hope that’s the real URL.


    What Is Serverless? — Smashing Magazine

    08/11/2020

    We’re talking about Serverless architectures. What does that mean, and how does it differ from how we might build sites currently? Drew McLellan talks to Chris Coyier to find out.

    Today, we’re talking about Serverless architectures. What does that mean, and how does it differ from how we might build sites currently? I spoke to Chris Coyier to find out.

    Show Notes

    Weekly Update

    Transcript

    Drew McLellan: He’s a web designer and developer who you may know from CSS-Tricks, a website he started more than 10 years ago and that remains a fantastic learning resource for those building websites. He’s the co-founder of CodePen, the browser-based coding playground and community used by front-enders all around the world to share what they make and find inspiration from those they follow. Alongside Dave Rupert, he’s the co-host of ShopTalk Show, a podcast all about making websites. So we know he knows a lot about web development, but did you know he once won a hot dog eating competition using only his charm? My smashing friends, please welcome Chris Coyier. Hello Chris, how are you?

    Chris Coyier: Hey, I’m smashing.

    Drew: I wanted to talk to you today not about CodePen, and I don’t necessarily want to talk to you about CSS-Tricks, which is one of those amazing resources that I’m sure everyone knows appears right at the top of Google Search results when looking for answers about any web dev question. Up pops your face and there’s a useful blog post written by you or one of your guest contributors.

    Chris: Oh, I used to actually do that. There was a… I don’t know, it probably was during the time of when Google had that weird social network. What was that? Google Plus?

    Drew: Oh, Plus, yeah.

    Chris: Yeah, where they would associate a website with a Plus account, and so my Plus account had an avatar, and the avatar was me, so it would show up in search results. I think those days are gone. I think if you…

    Drew: I think so, yeah-

    Chris: Yeah.

    Drew: But I kind of wanted to talk to you about something that has been a little bit more of a sort of side interest of yours, and that’s this concept of serverless architectures.

    Chris: Mm (affirmative).

    Drew: This is something you’ve been learning sort of more about for a little while. Is that right?

    Chris: Yeah, yeah. I’m just a fan. It seems like a natural fit to the evolution of front-end development, which is where I feel like I have, at least, some expertise. I consider myself much more of a… much more useful on the front-end than the back-end, not that I… I do it all these days. I’ve been around long enough that I’m not afraid of looking at a little Ruby code, that’s for sure. But I prefer the front-end. I’ve studied it more. I’ve participated in projects more at that level, and then along comes this little kind of a new paradigm that says, “You can use your JavaScript skills on the server,” and it’s interesting. You know? That’s how I think of it. There’s a lot more to it than that, but that’s why I care, is because I feel it’s like front-end developers have dug so deep into JavaScript. And now we can use that same skill set elsewhere. Mm, pretty cool.

    Drew: Seems like a whole new world has opened up, whereas if you were just a front-end coder… I say, just a front-end coder, I shouldn’t. If you’re a front-end coder, and you’re used to working with a colleague or a friend to help you with the back-end implementation, suddenly that’s opened up. And it’s something that you can manage more of the whole stack yourself.

    Chris: Yeah, yeah. That’s it.

    Drew: Addressing the elephant in the room, right at the top. We’re talking about serverless, and obviously, naming things is hard. We all know that. Serverless architecture doesn’t mean there are no servers, does it?

    Chris: I think it’s mandatory, like if this is the first podcast you’re hearing of it, or in the first… you’re only hearing the word “serverless” in the first dozen times you ever heard it, it’s mandatory that you have a visceral reaction and have this kind of, “Oh, but there are still servers.” That’s okay. If that’s happening to you right now, just know that, that’s a required step in this. It’s just like anything else in life. There’s stages to understanding. The first time you hear something, you’re required to kind of reject it a little bit, and then only after a dozen times or so, or after it’s proven its worth a little bit to you, do you get to enter the further stages of understanding here. But the word has won, so if you’re still fighting against the word “serverless”, I hate to tell you, that the train has left the station there. The word is already successful. You’re not going to win this one. So, sorry.

    Chris: But I do think it’s interesting that… it’s starting to be like, maybe there actually aren’t servers involved sometimes. I would think one of the things that locked serverless in as a concept was AWS Lambda. They were kind of the first on the scene. A lambda is like a function that you give to AWS and it puts it in the magical sky and then… it has a URL, and you can hit it and it will run that function and return something if you want it to. You know? That’s just HTTP or whatever. That’s how it works, which… the first time you hear that, you’re like, “Why? I don’t care.” But then, there’s some obvious things to it. It could know my API keys that nobody else has access to. That’s why you run back-end to begin with, is that it knows secret stuff that doesn’t have to be in the JavaScript on the client side. So if it needs to talk to a database, it can do that. It can do that securely without having to expose API keys elsewhere. Or even where that data is or how it gets it, it’s…
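The “function that talks to a database using secrets the client never sees” that Chris describes, reduced to a sketch. The query runner is passed in, since the real thing would use whatever database client the server-side credentials unlock; every name, table, and parameter here is illustrative:

```javascript
// Lambda-style function: receives a request event, queries a database using
// credentials that exist only on the server, and returns JSON over HTTP.
async function getRecentPosts(event, queryDatabase) {
  const limit = Number((event.queryStringParameters || {}).limit) || 5;
  const rows = await queryDatabase("SELECT title FROM posts LIMIT ?", [limit]);
  return {
    statusCode: 200,
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(rows),
  };
}
```

The browser only ever sees the function’s URL and its JSON response; where the data lives and how it’s fetched stays behind the curtain, which is exactly the point.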

    Chris: So that’s pretty cool. I can write a function that talks to a database, get some data, returns that. Cool. So, Lambda is that, but AWS being AWS, you have to pick a region. You’re like, “I don’t know. Where should it be, Virginia? Oregon? Should I pick the Australia one? I don’t know.” They have 20, 30. I don’t even know how many they have these days, but even lambdas had regions. They, I think, these days have Lambda@Edge, which means it’s all of the regions, which is kind of cool. But they were first, and now everybody’s got something like Lambda. All the cloud services. They want some kind of service in this world. One of them is Cloudflare. Cloudflare has workers. They have way more locations than AWS has, but they executed it kind of at a different time too… the way a Cloudflare worker… it’s similar to a lambda in that you can run Node. You can run JavaScript. You can run a number of other languages too, but… I think of this stuff largely, the most interesting language is JavaScript, just because of the prevalence of it.

    Chris: It happens just at the CDN level, which I guess is a server, but I tend to not think of CDNs as a server. Not as obviously as something else. It’s starting to feel even more serverless-y lately. Is a CDN a server? I mean, I guess it’s a computer somewhere, but it feels like even less server-y.

    Drew: It feels like, yes, a CDN may be a server, but it’s the most sort of minimal version of a server. It’s like a thin server, if you like.

    Chris: Yeah. Sure.

    Drew: All right. I’ve heard it said… I can’t remember the source to credit, unfortunately, but I’ve heard serverless described as being “like using a ride-sharing service like Uber or Lyft” or whatever. You can be carless and not own a car, but that doesn’t mean you never use a car.

    Chris: Yeah, it doesn’t mean cars don’t exist. Mm, that’s nice.

    Drew: You just summon one when you need it, but at the same time, you’re not paying the upfront purchase cost of a car. You’re not paying maintenance or fuel or-

    Chris: Right, and the pricing makes sense, too, right? That’s nice. That’s a nice analogy, I think. And then, because it’s at the CDN level too, it just intercepts HTTP requests that are already happening, which means you don’t ask it… you don’t send a request to it and it sends a request back. It’s just happening during the request naturally, which also makes it feel less server-y. I don’t know, it’s interesting. It’s interesting for sure. So that’s a big deal, though, that you brought up the pricing thing. That you only pay for what you use. That’s significant too, because… let’s say, you’re a back-end dev, who’s used to spinning up servers their whole life. And they run the costs, “I need this kind of server with this kind of memory and this kind of CPU and these kind of specs. And this is how much it’s going to cost.” Serverless comes along and chops the head off of that pricing.

    Chris: So, even if you’re a back-end dev who just doesn’t like this that much, that they’re just not into it, like your skill set is just what it is over the years, you compare the price and you’re like, “What? I could be paying 1% of what I was paying before?” You are not allowed to not care about that, right? If you’re this back-end dev that’s paying a hundred times more for their service than they need to be paying, you’re just kind of bad at your job then. Sorry to say. This has come along and this has shattered pricing in a lot of ways. You have to care about that. And it’s kind of cool that somebody else is… It’s not like you don’t have to worry about security at all, but it’s not your server. You don’t have… your lambda or cloud function, or your worker, or whatever, isn’t sitting on a server that’s right next to some really sensitive data on your own network. It’s not right next to your database.

    Chris: If somebody writes code that somehow tries to eject itself from the worker or the lambda, or whatever, and tries to get access to other things in its way, there’s nothing there to get. So the security’s a big deal too, so again, if that’s your job as the server admin, is to deal with the security of this thing. Running it, running certain things in Lambda, you just get some natural security from it, which is great. So, it’s way cheaper. It’s way more secure. It encourages these small, modular architectures, which can be a good idea. It seems to be domino after domino of good ideas here. That’s why it’s notable. You know?

    Drew: Yeah, I mean, traditionally with a server based architecture that we’ve been running for decades on the web, you have a web server that you run yourself. It holds your front-end code, your back-end code, your database and everything. Then you need to maintain that and keep it running and pay the bills, and even if it’s not being used, it’s there clocking up bills. The user would make a request and it would build all that HTML query stuff from the database, send it all down the line to the browser. That process works. It’s how loads of things are built. It’s probably the majority of how the web is built. It’s how things like WordPress work. Is this really a problem that we need to solve? I mean, we’ve talked about costs a little bit. What are the other sort of problems with that, that we’re… that we need to address, and that serverless might help us with?

    Chris: Yeah, the problems with the old school approach. Yeah, I don’t know, maybe there isn’t any. I mean, I’m not saying the whole web needs to change their whole… the whole thing overnight. I don’t know. Maybe it doesn’t really, but I think it opens up doors. It just seems like, when good ideas arrive like this, they just slowly change how the web operates at all. So, if there’s some CMS that is built in some way that expects a database to be there, it means that maybe the hosts of the future will start leveraging this in interesting ways. Maybe it feels to you like it’s still just a traditional server, but the hosts themselves have farmed it out, how they operate, to serverless architectures. So you don’t even really know that that’s happening, but they’ve found a way to slash their costs by hosting the stuff that you need in serverless ways. Maybe you don’t even need to care as a developer, but at a meta level, that’s what’s happening. Maybe. I don’t know.

    Chris: It also doesn’t mean that… Databases are still there. If it turns out that architecturally having a relational database is the correct way to store that data, great. I mention that because this world of Serverless is kind of growing up at the same time that JAMstack is. And JAMstack is this architecture that’s, “You should be serving your website off of static hosts, that run nothing at all except for…” They’re like little CDNs. They’re like, “I can do nothing. I don’t run PHP. I don’t run Ruby. I run nothing. I run on a tiny little web server that’s just designed to serve static files only.”

    Chris: “And then, if you need to do more than that, if you need to pull data from a relational database, then please do it at some other time, not at the server time. You can either do it in a build process ahead of time, and pull that stuff out of the database, pre-build static files and I’ll serve those, or do it at runtime.” Meaning you get this shell of a document, and then it makes a JavaScript request to get some data and prefills it then. So you do it ahead of time or after time, but it doesn’t mean, “Don’t use a relational database.” It just means, “Don’t have the server generate it at the time of the request of the document,” which is a… I don’t know, it’s a little bit of a paradigm shift.

    Chris: It’s not just JAMstack either. We’re also living in the time of JavaScript frameworks. We’re living in a time where it’s starting to be a little more expected that the way that a JavaScript application boots up, is that it mounts some components, and as those components mount, it asks for the data that it needs. And so, it can be kind of a natural fit for something like a React website to be like, “Well, I’ll just hit a serverless function to cough up the data that it needs. It hits some JSON API essentially. I get the JSON data that I need and I construct myself out of that data, and then I render onto the page.” Now, whether that’s good or bad for the web, it’s like, “I don’t know. Too bad. Ship has sailed. That’s how a lot of people are building sites.” It’s just client rendered things. So, serverless and modern JavaScript kind of go hand in hand.
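
    The mount-then-fetch pattern Chris describes can be sketched framework-free. Here `fetchJson` and the endpoint path are hypothetical stand-ins for a real `fetch()` against a deployed cloud function.

```javascript
// Stand-in for fetch() hitting a JSON API backed by a serverless function.
async function fetchJson(url) {
  return { title: "Hello from a cloud function" }; // stand-in response
}

// A "component" mounts, asks for the data it needs, then renders from it.
async function mountWidget(render) {
  const data = await fetchJson("/api/widget-data");
  render(`<h2>${data.title}</h2>`); // construct the markup from the JSON
}
```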

    Drew: I suppose you don’t have to wholesale… be looking at one architecture or another. There’s an area in the middle where parts of an infrastructure might be more traditional and parts could be serverless, I’m guessing?

    Chris: Yeah. Well, they’re trying to tell you that anyway. Anybody that wants to sell you any part of their architecture is like, “You don’t have to buy in all right now. Just do it a little bit.” Because of course, they want you to dip your toe into whatever they’re selling, because once you dip the toe, the chances that you splash yourself into the pool are a lot higher. So, I think that… it’s not a lie, though, necessarily, although I find a little less luck in… I don’t want my stack to be a little bit of everything. I think there’s some technical debt there that I don’t always want to swallow.

    Drew: Mm (affirmative).

    Chris: But it’s possible to do. I think the most quoted one is… let’s say I have a site that has an eCommerce element to it, which means… and let’s say large scale eCommerce, so 10,000 products or something, that this JAMstack architecture hasn’t gotten to the point where that’s always particularly efficient to rebuild that statically. So, the thinking goes, “Then don’t.” Let that part kind of hydrate naturally with… hit serverless functions and get the data that it needs, and do all that. But the rest of the site, which isn’t… there’s not as many pages, there’s not as much data, you could kind of pre-render or whatever. So a little bit of both.

    Drew: Of course, plenty of people are dealing with legacy systems that… some old database thing that was built in the 2000s that they may be able to stick a sort of JSON API layer on top of…

    Chris: Yeah.

    Drew: … and build something more modern, and perhaps serverless, and then still interact with those legacy systems by sort of gluing it altogether in a weird way.

    Chris: Yeah. I like that though, don’t you? Most websites already exist. How many of us are totally green-fielding websites? Most of us work on some crap that already exists that needs to be dragged into the future for some reason, because I don’t know, developers want to work faster, or you can’t hire anybody in COBOL anymore, or whatever the story is. You know?

    Drew: So terminology wise, we’re talking about JAMstack which is this methodology of running a code pretty much in the browser, serving it from a CDN. So, not having anything dynamic on the server. And then when we talk about serverless, we’re talking about those small bits of functionality that run on their server somewhere else. Is that right? That we were talking about these cloud function kind of-

    Chris: Yeah, I mean, they just happen to be both kind of hot ideas right now. So it’s kind of easy to talk about one and talk about the other. But they don’t necessarily need to be together. You could run a JAMstack site that has nothing to do with serverless anything. You’re just doing it, you just pre-build the site and run it, and you can use serverless without having to care about JAMstack. In fact, CodePen does nothing JAMstack at all. Not that we want to talk about CodePen necessarily, but it’s a Ruby on Rails app. It runs on a whole bunch of AWS EC2 instances and a variety of other architecture to make it happen. But we use serverless stuff whenever we can for whatever we can, because it’s cheap and secure, and just a nice way to work. So, no JAMstack in use at all but serverless all over the place.

    Drew: That’s quite interesting. What sort of tasks are you putting serverless to on CodePen?

    Chris: Well, there’s a whole bunch of things. One of them is, I think, hopefully fairly obvious is, I need… the point of CodePen is that you write HTML, CSS and JavaScript in the browser and it renders it in front of you, right? But you can pick pre-processor languages as well. Let’s say you like Sass. You turn Sass on in the CSS, and you write Sass. Well, something has to process the Sass. These days, Sass is written in Dart or something.

    Chris: Theoretically, you could do that in the client. But these libraries that do pre-processing are pretty big. I don’t think I want to ship the entire Sass library to you, just to run that thing. I don’t want to… it’s just not, that’s not the right architecture for this necessarily. Maybe it is down the road, I mean, we could talk about offline crap, yada, yada, Web Workers. There’s a million architectural things we could do. But here’s how it does work now, is there’s a lambda. It processes Sass. It has one tiny, tiny, tiny, little job.

    Chris: You send it this blob of Sass and it sends you stuff back, which is the processed CSS, maybe a source map, whatever. It has one tiny little job and we probably pay for that lambda, like four cents or something. Because lambdas are just incredibly cheap and you can hammer it too. You don’t have to worry about scale. You just hit that thing as much as you want and your bill will be astonishingly cheap. There are moments where serverless starts to cross that line of being too expensive. I don’t know what that is, I’m not the master of stuff like that. But generally, any serverless stuff we do, we basically… all nearly count as free, because it’s that cheap. But there’s one for Sass. There’s one for Less. There’s one for Babel. There’s one for TypeScript. There’s one for… All those are individual lambdas that we run. Here’s some code, give it to the lambda, it comes back, and we do whatever we’re going to do with it. But we use it for a lot more than that, even recently.
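
    A single-purpose preprocessor lambda like the one Chris describes — one tiny job, a blob of code in, processed code back — could look roughly like this. The real function would call the Sass compiler; here a trivial stand-in transform keeps the sketch self-contained.

```javascript
// Stand-in for the real compiler call, e.g. the Sass library's compile step.
function compile(source) {
  return source.replace(/\$(\w+)/g, "var(--$1)");
}

// The lambda: receive a blob of source, send back the processed result.
async function handler(event) {
  const { source } = JSON.parse(event.body);
  try {
    return { statusCode: 200, body: JSON.stringify({ css: compile(source) }) };
  } catch (err) {
    // Report compile errors instead of crashing the function.
    return { statusCode: 422, body: JSON.stringify({ error: String(err) }) };
  }
}
```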

    Chris: Here’s an example. Every single Pen on CodePen has a screenshot. That’s kind of cool, right? So, the people make a thing and then we need a PNG or a JPEG, or something of it, so that we can… that way when you tweet it, you get the little preview of it. If you share it in Slack, you get the little preview of it. We use it on the website itself to render… instead of an iframe, if we could detect that the Pen isn’t animated, because an image is much lighter than an iframe, so why not use the image? It’s not animated anyway. Just performance gains like that. So each of those screenshots has a URL to it, obviously. And we’ve architected it so that that URL is actually a serverless function. It’s a worker. And so, if that URL gets hit, we can really quickly check if we’ve already taken that screenshot or not.

    Chris: That’s actually enabled by CloudFlare Workers, because CloudFlare Workers are not just a serverless function, but they have a data store too. They have this thing called key-value store, so the ID of that, we can just check really quick and it’ll be, “True or false, do you have it or not?” If it’s got it, it serves it. And it serves it over CloudFlare, which is super fast to begin with. And then gives you all this ability too. Because it’s an image CDN, you can say, “Well, serve it in the optimal format. Serve it as these dimensions.” I don’t have to make the image in those dimensions. You just put the dimensions in the URL and it comes back as that size, magically. So that’s really nice. If it doesn’t have it, it asks another serverless function to make it really quick. So it’ll make it and then it’ll put it in a bucket somewhere… because you have to have an origin for the image, right? You have to actually host it somewhere usually. So we put it in an S3 bucket real quick and then serve it.

    Chris: So there’s no queuing server, there’s no nothing. It’s like serverless functions manage the creation, storage and serving of these images. And there’s like 50 million or 80 million of them or something. It’s a lot, so it handles that as scale pretty nicely. We just don’t even touch it. It just happens. It all happens super fast. Super nice.
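
    The check-then-generate flow can be sketched like this. The `Map` and the generate helper are stand-ins for Cloudflare's Workers KV store and the real screenshot service; the real version would also upload the image to an S3 bucket as the origin.

```javascript
const kv = new Map(); // stand-in for the Workers key-value store

// Stand-in for "ask another serverless function to make it really quick".
async function generateScreenshot(penId) {
  return `png-bytes-for-${penId}`;
}

// The worker behind the screenshot URL: serve from the store if we
// already have it, otherwise generate, remember, and serve.
async function serveScreenshot(penId) {
  let image = kv.get(penId);           // "true or false, do you have it?"
  if (!image) {
    image = await generateScreenshot(penId);
    kv.set(penId, image);              // store it for every later request
  }
  return image;
}
```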

    Drew: I guess it… well, a serverless function is ideally going to suit a task that needs very little knowledge of state of things. I mean, you mentioned CloudFlare’s ability to store key-value pairs to see if you’ve got something cached already or not.

    Chris: Yeah. That’s what they’re trying to solve, though, with those. Those key-value pairs, is that… I think that traditionally was true. They’re like, “Avoid state in the thing,” because you just can’t count on it. And CloudFlare Workers are being like, “Yeah, actually, you can deal with state, to some degree.” It’s not as fancy as a… I don’t know, it’s key values, so it’s a key and a value. It’s not like a nested, relational fancy thing. So there’s probably some limits to that. But this is baby days for this. I think that stuff’s going to evolve to be more powerful, so you do have some ability to do some state-like stuff.

    Drew: And sometimes the limitation, that sort of limited ability to maintain state, or the fact that you have no… you want to maintain no state at all, kind of pushes you into an architecture that gives you this sort of… Well, when we talk about the software philosophy of “Small Pieces Loosely Joined”, don’t we?

    Chris: Mm (affirmative).

    Drew: Where each little component does one thing and does it well. And doesn’t really know about the rest of the ecosystem around it. And it seems that really applies to this concept of serverless functions. Do you agree?

    Chris: Yeah. I think you could have a philosophical debate whether that’s a good idea or not. You know? I think some people like the monolith, as it were. I think there’s possible… there’s ways to overdo this and to make too many small parts that are too hard to test altogether. It’s nice to have a test that’s like, “Oh, I wonder if my Sass function is working. Well, let’s just write a little test for it and make sure that it is.” But let’s say, what matters to the user is some string of seven of those. How do you test all seven of them together? I think that story gets a little more complicated. I don’t know how to speak super intelligently to all that stuff, but I know that it’s not necessarily that, if you roll with all serverless functions that’s automatically a better architecture than any other architecture. I like it. It reasons out to me nicely, but I don’t know that it’s the end-all be-all of all architectures. You know?

    Drew: To me, it feels extremely web-like, in that… this is exactly how HTML works, isn’t it? You deliver some HTML and the browser will then go and fetch your images and fetch your JavaScript and fetch your CSS. It seems like it’s an expansion of that –

    Chris: It’s nice.

    Drew: … sort of idea. But, one thing we know about the web, is it’s designed to be resilient because network’s fragile.

    Chris: Mm (affirmative).

    Drew: How robust is the sort of serverless approach? What happens if something… if one of those small pieces goes away?

    Chris: That would be very bad. You know? It would be a disaster. Your site would go down just like any other server, if it happens to go down, I guess.

    Drew: Are there ways to mitigate that, that are particularly –

    Chris: I don’t know.

    Drew: … suited to this sort of approach, that you’ve come across?

    Chris: Maybe. I mean, like I said, a really super fancy robust thing might be like… let’s say you visit CodePen and let’s say that there’s a JavaScript implementation of Sass and we noticed that you’re on a fairly fast network and that you’re idle right now. Maybe we’ll go grab that JavaScript and we’ll throw it in a service worker. Then, if we detect that the lambda fails, or something, or that you have this thing installed already, then we’ll hit the service worker instead of the lambda, and service workers are able to work offline. So, that’s kind of nice too. That’s interesting. I mean, they are the same language-ish. Service workers are JavaScript and a lot of Cloud functions are JavaScript, so there’s some… I think that’s a possibility, although that… it’s just, that’s some serious technical that… It just scares me to have this chunk of JavaScript that you’ve delivered to how many thousands of users, that you don’t necessarily know what they have, and what version of it they have. Eww, but that’s just my own scarediness. I’m sure some people have done a good job with that type of thing.
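
    The fallback Chris imagines — try the lambda first, fall back to a locally installed implementation if it fails — reduces to a simple try/catch shape. Both compilers here are stand-ins; in the real idea, the local one would live in a service worker.

```javascript
// Stand-in for hitting the lambda over the network.
async function compileViaLambda(source) {
  throw new Error("lambda unreachable"); // simulate the lambda failing
}

// Stand-in for the JavaScript implementation cached in a service worker.
function compileLocally(source) {
  return `/* compiled locally */ ${source}`;
}

// Prefer the lambda; fall back to the local copy on failure (or offline).
async function compile(source) {
  try {
    return await compileViaLambda(source);
  } catch {
    return compileLocally(source);
  }
}
```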

    Chris: I actually don’t know. Maybe you know some strategies that I don’t, on resiliency of serverless.

    Drew: I guess there’s a failure mode, a style of failure, that could happen with serverless functions, where you run a function once and it fails, and you can run it a second time immediately afterwards and it would succeed, because it might hit a completely different server. Or whatever the problem was with that run may not exist on a second request. The issue of an entire host being down is one thing, but maybe there are… you have individual problems with the machine. You have a particular server where its memory has gone bad, and it’s throwing a load of errors, and the first time you hit it, it’s going to fail. Second time, that problem might have been routed around.

    Chris: Companies that tend to offer this technology, you have to trust them, but they also happen to be the type of companies that… this is their pride. This is the reason why people use them is because they’re reliable. I’m sure people could point to some AWS outages of the past, but they tend to be a little rare, and not super common. If you were hosting your own crap, I bet they got you beat from an SLA percentage kind of level. You know? So it’s not like, “Don’t build in a resilient way,” but generally the type of companies that offer these things are pretty damn reliable. The chances of you going down because you screwed up that function are a lot higher than because their architecture is failing.

    Drew: I suppose, I mean, just like anything where you’re using an API or something that can fail, is just making sure you structure your code to cope with that failure mode, and to know what happens next, rather than just throwing up an error to the user, or just dying, or what have you. It’s being aware of that and asking the user to try again. Or trying again yourself, or something.

    Chris: Yeah, I like that idea of trying more than once, rather than just being, “Oh no. Fail. Abort.” “I don’t know, why don’t you try again there, buddy?”
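
    The "try again there, buddy" idea is a few lines of code: retry a flaky call a couple of times before surfacing the error, since a second request may land on a healthy machine. A minimal sketch:

```javascript
// Run fn up to `attempts` times, returning the first success.
async function withRetry(fn, attempts = 3) {
  let lastError;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err; // remember the failure and try again
    }
  }
  throw lastError; // only give up after every attempt failed
}
```

In practice you might also add a short delay between attempts so a transient outage has time to clear.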

    Drew: So I mean, when it comes to testing and development of serverless functions, sort of cloud functions, is that something that can be done locally? Does it have to be done in the cloud? Are there ways to manage that?

    Chris: I think there are some ways. I don’t know if the story is as awesome. It’s still a relatively new concept, so I think that that gets better and better. But from what I know, for one thing, you’re writing a fairly normal Node function. Assuming you’re using JavaScript to do this, and I know that on Lambda specifically, they support all kinds of stuff. You can write a fricking PHP Cloud Function. You can write a Ruby Cloud Function. So, I know I’m specifically talking about JavaScript, because I have a feeling that most of these things are JavaScript. Even no matter what language it is, I mean, you can go to your command line locally and execute the thing. Some of that testing is… you just test it like you would any other code. You just call the function locally and see if it works.

    Chris: It’s a little different story when you’re talking about an HTTP request to it, that’s the thing that you’re trying to test. Does it respond to the request properly? And does it return the stuff properly? I don’t know. The network might get involved there. So you might want to write tests at that level. That’s fine. I don’t know. What is the normal story there? You spin up some kind of local server or something that serves it. Use Postman, I don’t know. But there’s… Frameworks try to help too. I know that the serverless “.com”, which is just terribly confusing, but there’s literally a company called Serverless and they make a framework for writing the serverless functions that helps you deploy them.

    Chris: So if you like NPM install serverless, you get their framework. And it’s widely regarded as very good, because it’s just very helpful, but they don’t have their own cloud or whatever. You write these and then it helps you get them to a real lambda. Or it might work with multiple cloud providers. I don’t even know these days, but their purpose of existing is to make the deployment story easier. I don’t know what… AWS is not renowned for their simplicity. You know? There’s all this world of tooling to help you use AWS and they’re one of them.

    Chris: They have some kind of paid product. I don’t even know what it is exactly. I think one of the things they do is… the purpose of using them is for testing, is to have a dev environment that’s for testing your serverless function.

    Drew: Yeah, because I guess, that is quite a big part of the workflow, isn’t it? If you’ve written your JavaScript function, you’ve tested it locally, you know it’s going to do the job. How do you actually pick which provider it’s going to go into and how do you get it onto that service? Now, I mean, that’s a minefield, isn’t it?

    Chris: Yeah. I mean, if you want to use no tooling at all, I think they have a really… like AWS, specifically, has a really rudimentary GUI for the thing. You can paste the code in there and hit save and be like, “Okay, I guess it’s live now.” That’s not the best dev story, but I think you could do it that way. I know CloudFlare workers have this thing called Wrangler that you install locally. You spin it up and it spins up a fake browser on the top and then dev tools below. Then you can visit the URL and it somehow intercepts that and runs your local cloud function against it. Because one of the interesting things about workers is… you know how I described how it… you don’t hit a URL and then it returns stuff. It just automatically runs when you… when it intercepts the URL, like CDN style.

    Chris: So, one of the things it can do is manipulate the HTML on the way through. The worker, it has access to the complete HTML document. They have a jQuery-esque thing that’s like, “Look for this selector. Get the content from it. Replace it with this content. And then continue the request.” So you can mess with code on the way through it. To test that locally, you’re using their little Wrangler tool thing to do that. Also, I think the way we did it was… it’s also a little dangerous. The second you put it live, it’s affecting all your web traffic. It’s kind of a big deal. You don’t want to screw up a worker. You know? You can spin up a dev worker that’s at a fake subdomain, and because it’s CloudFlare, you can… CloudFlare can just make a subdomain anyway. I don’t know. It’s just kind of a nice way to do a… as you’re only affecting sub-domain traffic, not your main traffic yet. But the subdomain’s just a mirror of a production anyway, so that’s kind of a… that’s a testing story there.

    Chris: It brings up an interesting thing, though, to me. It’s like… imagine you have two websites. One of them is… for us it’s like a Ruby on Rails app. Whatever. It’s a thing. But we don’t have a CMS for that. That’s just like… it’s not a CMS, really. I think there’s probably Ruby CMSs, but there’s not any renowned ones. You know? It seems like all the good CMSs are PHP, for some reason. So, you want a quality CMS. Drew, you’ve lived in the CMS market for a long time –

    Drew: Absolutely.

    Chris: … so you know how this goes. Let’s say you want to manage your sites in Perch or whatever, because it’s a good CMS and that’s the proper thing to use to build the kind of pages you want to build. But you don’t want to run them on the same server. You want to manage the pages on one site, but show them on another site. Well, I don’t know, there’s any number of ways to do that. But one JavaScript way could be, “Okay, load the page. There’s an empty div there. Run some JavaScript. Ask the other site for the content of that page and then plunk it out on the new page.” That’s fine, I guess, but now you’re in a client side rendered page. It’s going to be slow. It’s going to have bad SEO, because… Google will see it eventually, but it takes 10 days or something. It’s just a bad story for SEO. It’s not very resilient, because who knows what’s going to happen in the network. It’s not the greatest way to do this kind of “content elsewhere, content on site B, show page of site A”, situation.

    Chris: You could also do it on the server side, though. Let’s say you had… Ruby is capable of making a network request too, but that’s even scarier because then if something fails on the network, the whole page could die or something. It’s like a nervous thing. I don’t love doing that either. But we did this just recently with a worker, in that we… because the worker’s JavaScript, it can make a fetch request. So, it fetches site A, it finds this div on the page, and then it goes and asks site B for the content. Gets the content. Plugs it into that div, and serves the page before it gets anything. So it looks like a server rendered page, but it wasn’t. It all happened at the… on the edge, at the worker level, at the serverless level.

    Chris: So it’s kind of cool. I think you can imagine a fetch request on the browser probably takes, I don’t know, a second and a half or something. It probably takes a minute to do it. But because these are… site B is hosted on some nice hosting and Cloudflare has some… who knows what kind of super computers they use to do it. They do. Those are just two servers talking to each other, and that fetch request happens just so super duper, duper fast. It’s not limited to the internet connection speed of the user, so that little request takes like two milliseconds to get that data. So it’s kind of this cool way to stitch together a site from multiple sources and have it feel like, and behave like, a server rendered page. I think there’s a cool future to that.
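
    The edge-stitching flow can be sketched like this. `fetchPage` and the two site URLs are stand-ins for the worker's `fetch()` against real origins, and the selector work is approximated with a string replace (Cloudflare's actual API for that part is HTMLRewriter).

```javascript
// Stand-in for fetch(): canned responses for the two sites.
async function fetchPage(url) {
  const pages = {
    "https://site-a.example/page": '<main><div id="cms-content"></div></main>',
    "https://site-b.example/api/page": "<p>Content managed on site B</p>",
  };
  return pages[url];
}

// Fetch the shell from site A, the content from site B, and serve the
// assembled page -- it looks server-rendered, but it happened at the edge.
async function stitch(pageUrl, contentUrl) {
  const [shell, content] = await Promise.all([
    fetchPage(pageUrl),     // the page with the empty div
    fetchPage(contentUrl),  // the CMS-managed content
  ]);
  return shell.replace('<div id="cms-content"></div>',
                       `<div id="cms-content">${content}</div>`);
}
```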

    Drew: Are there any sort of conventions that are springing up around serverless stuff? I’m sort of thinking about how to architect things. Say I’ve got something where I want to do two sort of requests to different APIs. I want to take in a postal address and geocode it against one, and then take those coordinates and send that to a florist who’s going to flower bomb my front yard or something. How would you build that? Would you do two separate things? Or would you turn that into one function and just make the request once from the browser?

    Chris: Mm (affirmative). That’s a fascinating question. I’d probably have an architect function or something. One function would be the one that’s in charge of orchestrating the rest of them. It doesn’t have to be, your website is the hub and it only communicates to this array of single sources. Serverless functions can talk to other serverless functions. So I think that’s somewhat common to have kind of an orchestrator function that makes the different calls and stitches them together, and returns them as one. I think that is probably smart and faster, because you want servers talking to servers, not the client talking to a whole bunch of servers. If it can make one request and get everything that it needs, I think that’s probably generally a good idea-
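
    The orchestrator idea looks like this in miniature: the browser makes one request, and the chained upstream calls happen server-to-server, where they are fast and the API keys stay hidden. `geocode` and `orderFlowers` are hypothetical stand-ins for the two third-party APIs Drew describes.

```javascript
// Stand-in for a geocoding API call (address -> coordinates).
async function geocode(address) {
  return { lat: 51.5, lng: -0.12 };
}

// Stand-in for the florist API call (coordinates -> order).
async function orderFlowers(coords) {
  return { confirmed: true, coords };
}

// The orchestrator function: makes both calls, stitches them together,
// and returns them to the client as one response.
async function handler(event) {
  const { address } = JSON.parse(event.body);
  const coords = await geocode(address);
  const order = await orderFlowers(coords);
  return { statusCode: 200, body: JSON.stringify(order) };
}
```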

    Drew: Yeah, that sounds smart. Yep.

    Chris: But I think that’s the ultimate thing. You get a bunch of server nerds talking, they’ll talk about the different approaches to that exact idea in 10 different ways.

    Drew: Yeah. No, that sounds pretty smart. I mean, you mentioned as well that this approach is ideal if you’re using APIs where you’ve got secret information. You’ve got API keys or something that you don’t want to live in the client. Because I don’t know, maybe this florist API charges you $100 every time you flower bomb someone.

    Chris: Easily.

    Drew: You can basically use those functions to almost proxy the request and add in the secret information as it goes, and keep it secret. That’s a viable way to work?

    Chris: Yeah, yeah. I think so. I mean, secrets are, I don’t know, they’re interesting. They’re a form of buy in I think to whatever provider you go with, because… I think largely because of source control. It’s kind of like, you could just put your API key right in the serverless function, because it’s just going to a server, right? You don’t even have to abstract it, really. The client will never see that code that executes, but in order for it to get there, there’s probably a source control along the way. It’s probably like you commit to master, and then master… then some kind of deployment happens that makes that thing go to the serverless function. Then you can’t put your API key in there, because then it’s in the repo, and you don’t put your API keys in repos. That’s good advice. Now there’s stuff. We’ve just done… at CodePen recently, we started using this git-crypt thing, which is an interesting way to put keys safely into your repos, because it’s encrypted by the time anybody’s looking at that file.

    Chris: But only locally they’re decrypted, so they’re useful. So it’s just kind of an interesting idea. I don’t know if that helps in this case, but usually, cloud providers of these things have a web interface that’s, “Put your API keys here, and we’ll make them available at runtime of that function.” Then it kind of locks… it doesn’t lock you in forever but it kind of is… it’s not as easy to move, because all your keys are… you put in some input field and some admin interface somewhere.

    Drew: Yeah, I think that’s the way that Netlify manage it.

    Chris: They all do, you know?

    Drew: Yeah. You have the secret environment variables that you can set from the web interface. That seems to work quite nicely.

    Chris: Yeah, right. But then you got to leave… I don’t know, it’s not that big of a deal. I’m not saying they’re doing anything nefarious or anything. How do you deal with those secrets? Well, it’s a hard problem. So they kind of booted it to, I don’t know, “Just put them in this input field and we’ll take care of it for you, don’t worry about it.”

    Drew: Is there anything that you’ve seen that stands out as an obvious case for things that you can do with serverless, that you just couldn’t do with a traditional kind of serverfull approach? Or is it just taking that code and sort of almost deploying it in a different way?

    Chris: It’s probably mostly that. I don’t know that it unlocks any possibility that you just absolutely couldn’t run it any other way. Yeah, I think that’s a fair answer, but it does kind of commoditize it in an interesting way. Like, if somebody writes a really nice serverless function… I don’t know that this exists quite yet, but there could kind of be a marketplace, almost, for these functions. Like, I want a really good serverless function that can take a screenshot. That could be an open source project that has lots of eyeballs around it, that does a tremendously good job of doing it and solves all these weird edge cases. That’s the one I want to use. I think that’s kind of cool. You know? That you can kind of benefit from other people’s experience in that way. I think that will happen more and more.

    Drew: I guess it’s the benefit that we talked about, right at the top, of enabling people who write JavaScript and may have written JavaScript only for the front-end, to expand and use those skills on the back-end as well.

    Chris: Yeah, yeah. I think so, I think that’s… because there’s moments like… you don’t have to be tremendously skilled to know what’s appropriate and what’s not for a website. Like, I did a little tutorial the other week, where there was this thing Glitch uses… when you save a Glitch, they give you a slug for your thing that you built, that’s, “Whiskey, tango, foxtrot. 1,000.” It’s like a clever little thing. The chances of it being unique are super high, because I think they even append a number to it or something too. But they end up being these fun little things. They open source their library that has all those words in it, but it’s like hundreds of thousands of words. The file is huge. You know? It’s megabytes large of just a dictionary of words. You probably learn in your first year of development, “Don’t ship a JavaScript file that’s megabytes of a dictionary.” That’s not a good thing to ship. You know? But Node doesn’t care. You can ship hundreds of them. It’s irrelevant to the speed on a server.

    Drew: Yeah.

    Chris: It doesn’t matter on a server. So, I could be like, “Hmm, well, I’ll just do it in Node then.” I’ll have a statement that says, “Words equals require words,” or whatever, at the top. Have it randomize a number, pull it out of the array and return it. So that serverless function is eight lines of code with a package.json that pulls in this open source library. And then my front-end code, there’s a URL to the serverless function. It hits that URL. The URL returns one word or a group of words or whatever. You build your own little API for it. And now, I have a really kind of nice, efficient thing. What was nice about that is, it’s so simple. I’m not worried about the security of it. I don’t… you know?

    Chris: It’s just… a very average or beginner JavaScript developer, I think, can pull that off, which is cool. That’s an enabling thing that they didn’t have before. Before, they were like, “Well, here’s a 2MB array of words.” “Oh, I can’t ship that to the client.” “Oh, you’re just shut down then.” You might hit this wall that’s like, “I just can’t do that part then. I need to ask somebody else to help me with that or just not do it or pick more boring slugs or some…” It’s just, you have to go some other way that is a wall to you, because you couldn’t do it. And now, you’re, “Oh, well, I’ll just…” Instead of having that in my scripts folder, in my src/scripts folder, I’ll put it in my functions folder instead.

    Chris: You kind of like moved the script from one folder to the other. And that one happens to get deployed as a serverless function instead. How cool is that? You know? You’re using the same exact skill set, almost. There’s still some rough edges to it, but it’s pretty close.

    Drew: It’s super cool. You’ve put together a sort of little micro site all about these ideas, haven’t you?

    Chris: Yeah. I was a little early to the game. I was just working on it today, though, because… it gets pull requests. The idea… well, it’s at serverless.css-tricks.com and… there’s a dash in CSS-Tricks, by the way. So it’s a subdomain of CSS-Tricks, and I built it serverlessly too, so this is… CSS-Tricks is like a WordPress site, but this is a static site generator site. All the content of it is in the GitHub repo, which is open-source. So if you want to change the content of the site, you can just submit a pull request, which is nice because there’s been a hundred or so of those over time. But I built all the original content.

    Drew: It’s a super useful place, because it lists… If you’re thinking, “Right, I want to get started with serverless functions,” it lists all the providers who you could try it and…

    Chris: That’s all it is, pretty much, is lists of technology. Yeah.

    Drew: Which is great, because otherwise, you’re just Googling for whatever and you don’t know what you’re finding. Yeah, it’s lists of API providers that help you do these sorts of things.

    Chris: Forms is one example of that, because… so the minute that you choose to… let’s say, you’re going to go JAMstack, which I know that’s not necessarily the point of this, but you see how hand in hand they are. All of a sudden, you don’t have a PHP file or whatever to process that form with. How do you do forms on a JAMstack site? Well, there’s any number of ways to do it. Everybody and their sister wants to help you solve that problem, apparently. Netlify, who I think invented the word JAMstack, will try to help you, naturally, but you don’t have to use them.

    Chris: In fact, I was so surprised putting this site together. Let’s see. There’s six, nine, twelve, fifteen, eighteen, twenty one, twenty two services out there, that want to help you serverlessly process your forms on this site right now. If you want to be the 23rd, you’re welcome to it, but you have some competition out there. So the idea behind this is that you write a form in HTML, like literally a form element. And then the action attribute of the form, it can’t point anywhere internally, because there’s nothing to point to. You can’t process, so it points externally. It points to whatever they want you to point it to. They’ll process the form and then they tend to do things that you’d expect them to, like send an email notification. Or send a Slack thing. Or then send it to Zapier and Zapier will send it somewhere else. They all have slightly different feature sets and pricing and things, but they’re all trying to solve that problem for you, like, “You don’t want to process your own forms? No problem. We’ll process it for you.”

    Drew: Yeah, it’s a super useful resource. I’d really recommend everyone check it out. It’s serverless.css-tricks.com. So, I’ve been learning all about serverless. What have you been learning about lately, Chris?

    Chris: Well, I’m still very much in this world too and learning about serverless stuff. I had an idea to… I used to play this online role playing game ages ago. I just recently discovered that it’s still alive. It’s a text based medieval fantasy kind of game. I played it when AOL was a thing, because AOL wanted to have these games that you had to be logged on to play it, because they wanted you to spend hours and hours on AOL, so they could send you these huge bills, which was, I’m sure, why they did so well at some point.

    Drew: So billing by the second. Yeah.

    Chris: Yeah. So games was big for them. If they could get you playing games with other people on there. So this game kind of… it didn’t debut there, but it moved to AOL, because I’m sure they got a juicy deal for it, but it was so… I mean, it’s just, couldn’t possibly be nerdier. You’re a dwarven mage and you get rune staff from your leather sheath. And you type commands into it like a terminal. Then the game responds to you. I played that game for a very long time. I was very into it. I got into the community of it and the spirit of it. It was kind of a… it was like I was just alone by myself at my computer, but yet I look back on that time in my life, and be like, “That was a wonderful time in my life.” I was really… I just liked the people and the game and all that. But then I grew up and stopped playing it, because life happens to you.

    Chris: I only found out recently, because somebody started doing a podcast about it again… I don’t know how I came across it, but I just did. I was like, “This game is alive and well in today’s world, are you kidding me? This text based thing.” And I was more than happy to reactivate and get my old characters back and play it. But only to find out that the clients that they have you download for this game, haven’t evolved at all. They are awful. They almost assume that you’re using Windows. There’s just these terribly cheesy poorly rendering… and it’s text based, you think it’d at least have nice typography. No. So I’m like, “I could be involved. I could write a client for this game. Put beautiful typography in it.” Just modernize the thing, and I think the players of the game would appreciate it, but it felt overwhelming to me. “How can I do it?” But I found some open source projects. One of them is like… you can play the game through an actual terminal window, and it uses some open source libs to kind of make a GUI out of a terminal window.

    Drew: Really?

    Chris: I don’t know. So that was kind of cool. I was like, “If they wrote that, there must be code in there to how to connect to the game and get it all going and stuff. So at least I have some starter code.” I was trying to go along the app, “Maybe I’ll do it in Flutter or something,” so the final product app would work on mobile phones and, “I could really modernize this thing.” But then I got overwhelmed. I was like, “Ah, this is too big a… I can’t. I’m busy.” But I found another person who had the same idea and they were way further along with it, so I could just contribute on a design level. And it’s been really fun to work on, but I’ve been learning a lot too, because it’s rare for me to jump into a project that’s somebody else’s baby, and I’m just contributing to a little bit, and that has totally different technology choices than I would have ever picked.

    Chris: It’s an Electron app. They picked that, which is also kind of a cool way to go too, because it’s my web skills… so I’m not learning anything too weird, and it’s cross-platform, which is great. So, I’ve been learning a lot about Electron. I think it’s fun.

    Drew: That’s fascinating. It’s always amazing how little side projects and things that we do for fun, end up being the place where we sometimes learn the most. And learn skills that can then feed back into our sort of daily work.

    Chris: That’s the only way I learn things. I’m dragged into something that… I was like, “They’re not…” It’s rendered with a JavaScript library called Mithril, which is… I don’t know if you’ve ever heard of it, but it’s weird. It’s not… it’s almost like writing React without JSX. You have to “create element” and do all these… but it’s supposed to benchmark way better than it… And it actually kind of matters because in this text based game, the text is just flying. There’s a lot of data manipulation, which is like… you’d think this text based game would be so easy for a browser window to run, but it’s actually kind of not. There’s so much data manipulation happening, that you really have to be really… we have to be conscientious about the speed of the rendering. You know?
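For readers who haven't seen Mithril's hyperscript style, "React without JSX" looks roughly like nested `m(tag, attrs, children)` calls. The `m()` below is a toy stand-in that renders straight to an HTML string (the real Mithril `m()` returns vnodes for its virtual-DOM renderer), just to show the shape of the calls:

```javascript
// Toy hyperscript function in the shape of Mithril's m(). Simplified:
// it renders directly to a string instead of building vnodes.
function m(tag, attrs = {}, ...children) {
  const attrString = Object.entries(attrs)
    .map(([key, value]) => ` ${key}="${value}"`)
    .join("");
  return `<${tag}${attrString}>${children.join("")}</${tag}>`;
}

// A line of game output, written hyperscript-style rather than as JSX:
const line = m("div", { class: "game-line" },
  m("span", { class: "speaker" }, "mage"),
  ": you get your rune staff"
);
```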

    Drew: That’s fascinating-

    Chris: Pretty cool.

    Drew: Yeah. If you, dear listener, would like to hear more from Chris, you can find him on Twitter, where he’s @chriscoyier. Of course, CSS-Tricks can be found at css-tricks.com and CodePen at codepen.io. But most of all, I recommend that you subscribe to the ShopTalk Show podcast if you haven’t already done so, at shoptalkshow.com. Thanks for joining us today, Chris. Do you have any parting words?

    Chris: Smashingpodcast.com. I hope that’s the real URL.
