.NET 6 Web APIs with OpenAPI TypeScript Client Generation

Charles Chen
5 min read · Nov 25, 2021


Building Productive, Dead Simple APIs and Clients with .NET 6, OpenAPI, C#, and TypeScript

It’s the year 2022. You’re building a new web API from the ground up.

Which approach should you pick for building your API?

  1. gRPC?
  2. GraphQL?
  3. REST?

Some would make the case that it’s time to put REST…out to rest, but I beg to differ.

(If you just want to see the code, jump to my sample application on GitHub)

Why Not gRPC?

REST is a style of accessing remote server resources using HTTP semantics. As such, REST itself enforces no schema, unlike technologies such as SOAP and WSDL. While this provides great flexibility in building APIs, it can be challenging in terms of productivity.

gRPC, on the other hand, takes a schema-driven approach and creates strong contracts that can increase productivity.

The problem with gRPC APIs for the web is that it still feels a year or two away. Namely, browsers don't yet give applications the fine-grained control over HTTP/2 that gRPC requires. As a result, building web APIs with gRPC currently requires middleware or a proxy (the approach gRPC-Web takes) that translates the browser's HTTP/1.1 traffic into HTTP/2 to be consumed by a server-side gRPC endpoint.

This is from the gRPC blog…in 2019:

It is currently impossible to implement the HTTP/2 gRPC spec in the browser, as there is simply no browser API with enough fine-grained control over the requests. For example: there is no way to force the use of HTTP/2, and even if there was, raw HTTP/2 frames are inaccessible in browsers.

The blog itself is also a bit worrying: the number of posts dropped off significantly in 2021. So make of that what you will.

My take is that in 2022, I would not choose gRPC for a front-end API.

Why Not GraphQL?

Like gRPC, GraphQL provides a schema-driven approach to building APIs. On top of that, GraphQL provides much richer capabilities for interacting with your back-end APIs.

The problem with GraphQL really boils down to one thing in my opinion: the complexity cliff. As your application approaches a certain level of complexity, your GraphQL layer’s complexity does not scale linearly and you’re quickly facing a cliff that is difficult for a small team to manage. The initial productivity afforded by the schema-driven approach starts to drop off as your team starts to grapple with the challenges around managing performance, security, and scalability in GraphQL.

I think that for large enterprises, GraphQL is incredibly powerful as a federation layer for APIs and internal endpoints. AWS AppSync is a great example: it provides a single entry point to access nearly any resource sitting in your AWS deployment. To me, GraphQL makes the most sense for large enterprises with sprawling systems developed by a myriad of discrete teams. A well-architected, centrally managed GraphQL interface can be the layer that unifies these otherwise disparate systems and endpoints.

For small teams, in my experience, the operational work of getting GraphQL right at scale is very challenging.

Why REST?

If you’ve read my previous post on accidental complexity and YAGNI, then you know that I have a penchant for simple, stupid, mature technologies that are hard to get wrong and have complexity curves that scale linearly with the application.

To that end, REST is:

  • Proven, battle-tested, and very mature at this point
  • Widely adopted, well known, and easy to hire for
  • Easy to reason about
  • Relatively transparent as far as flow of data and control; it’s just HTTP request/response with little fanfare which makes it easy to profile performance, trace, and secure
  • Well supported in terms of tooling with no specialized tools needed to debug or interact with REST endpoints
  • Not going away any time soon

While there are a variety of places where REST comes up short against GraphQL and gRPC, one of the biggest ones is that REST is a style of interaction with a remote resource over HTTP; it does not prescribe any particular message format or schema for doing so. Without a schema, developer productivity suffers when building against a REST API.

Enter OpenAPI, a Linux Foundation project. It layers a schema on top of REST web services and brings many of the developer productivity benefits associated with GraphQL and gRPC. Specifically, it exposes a schema file that allows tooling to automatically generate strongly typed clients, for example. This schema file can also be used to automatically generate documentation using tools like ReDoc, Widdershins, and RapiDoc.

And one extra nice thing about starting with REST is that if you find yourself needing GraphQL in the future, you can always add resolvers to your REST endpoints or generate GraphQL schemas from OpenAPI schemas.

Working with .NET 6 Web APIs and OpenAPI Tooling

To extract the productivity benefits of working with REST APIs, we need some tooling support to:

  1. Automatically generate a schema from our REST APIs
  2. Use that schema to automatically generate client code

The .NET 6 Web API project template ships with OpenAPI support already built in. Our goal is to extend that to first generate a schema file at build time. To do that, we can follow this guide from Khalid Abuhakmeh.

(For a full walkthrough, see my sample GitHub project with .NET 6 and Svelte)
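For reference, the OpenAPI support in the stock .NET 6 Web API template boils down to a few lines in Program.cs. Roughly (a minimal sketch of the template defaults):

```csharp
var builder = WebApplication.CreateBuilder(args);

builder.Services.AddControllers();
// These two calls are what light up OpenAPI/Swagger in the template
builder.Services.AddEndpointsApiExplorer();
builder.Services.AddSwaggerGen();

var app = builder.Build();

if (app.Environment.IsDevelopment())
{
    // Serves the generated schema and the interactive Swagger UI in development
    app.UseSwagger();
    app.UseSwaggerUI();
}

app.UseHttpsRedirection();
app.UseAuthorization();
app.MapControllers();
app.Run();
```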

First, install the tooling to generate the schema at build time:

Commands to initialize the tooling for generating the schema at build time.
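The gist of it, following Khalid's guide, is to install the Swashbuckle CLI as a local dotnet tool:

```bash
# Create a local tool manifest for the repo (skip if one already exists)
dotnet new tool-manifest

# Install the Swashbuckle CLI so it can be invoked from the build
dotnet tool install Swashbuckle.AspNetCore.Cli
```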

Then we update the .csproj file to execute the CLI on build:

Lines 12–15 add the commands to generate the schema files on build.
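A minimal sketch of such a build target, assuming the default Swagger document name of v1 (adjust the document name and paths to your project):

```xml
<!-- Runs the Swashbuckle CLI after every build to (re)generate the schema file.
     The document name ("v1") and output path are assumptions; adjust as needed. -->
<Target Name="GenerateOpenApiSpec" AfterTargets="Build">
  <Exec Command="dotnet tool restore" WorkingDirectory="$(ProjectDir)" />
  <Exec Command="dotnet swagger tofile --yaml --output ../web/references/swagger.yaml $(OutputPath)$(AssemblyName).dll v1" WorkingDirectory="$(ProjectDir)" />
</Target>
```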

In this case, the --output ../web/references/swagger.yaml references a top-level directory in a mono-repo setup where our static web front-end client is located.

Now when we build our project, the schema gets automatically generated from our codebase.

Next, we want to be able to generate a TypeScript client and a strongly-typed data model automatically from this schema.

To do so, we’ll need to use the OpenAPI TypeScript Codegen project.

Using yarn (or npm), we simply install the tooling and we can automatically generate our client code:

Yarn commands to generate the TypeScript client
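Roughly like so (the paths are assumptions based on the mono-repo layout above):

```bash
# Install the generator as a dev dependency of the front-end project
yarn add --dev openapi-typescript-codegen

# Generate a typed client and models from the schema emitted at build time
yarn openapi --input ./references/swagger.yaml --output ./src/api --client fetch
```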

And if we add this to our package.json, we can automatically generate our strongly typed front-end client and data model in one go:

Updated package.json to add a codegen script.
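Something along these lines (the API project path is hypothetical; the key idea is to chain the API build, which regenerates the schema, with the client generation):

```json
{
  "scripts": {
    "codegen": "dotnet build ../api/Api.csproj && openapi --input ./references/swagger.yaml --output ./src/api --client fetch"
  }
}
```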

Now when we run a command like yarn run codegen, this will automatically rebuild our API, generate a new schema, and generate a new front-end client and data model.

We can use our client like so:

Using our strongly typed client and data model on the front-end static web site.
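As a sketch, assuming the stock WeatherForecast controller from the template (the actual service and model names depend on your controllers and OpenAPI operation IDs):

```ts
// Everything below is re-exported from the generated ./src/api folder.
import { OpenAPI, WeatherForecastService } from "./api";
import type { WeatherForecast } from "./api";

// Point the generated client at the API (URL is illustrative).
OpenAPI.BASE = "https://localhost:5001";

export async function loadForecasts(): Promise<WeatherForecast[]> {
  // Strongly typed: both the method and its return type come from the schema.
  return await WeatherForecastService.getWeatherForecast();
}
```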

This brings the productivity of a code-first approach while providing the benefits of a schema-based approach, such as strongly typed client and data model generation that boosts front-end development productivity. Incorporating documentation tools such as ReDoc or RapiDoc (or the out-of-the-box Swagger UI that ships with .NET Web APIs) further boosts productivity when interacting with the API.

I would argue that REST's explicitness also aids productivity: because REST APIs tend to be flatter than GraphQL, for example, consumers can easily see the capabilities of the API.

These days, REST doesn't quite have the cachet of gRPC or GraphQL; however, it is more productive than ever while still being dead simple for teams of all levels of experience to build solutions of all sizes. My favorite part about REST is that it's very difficult to screw up while still being relatively easy to layer complexity over as necessary (e.g. a proxy with a GraphQL resolver in the future). In other words, it's the exact opposite of building things you aren't gonna need.
