Design, Deployment, and Scaling Considerations in SignalR

In this article, let's learn about Design, Deployment, and Scaling Considerations in SignalR.

Note: If you have not done so already, I recommend you read the article on Streaming and Authentication and Authorization in SignalR.

Table of Contents

  1. Introduction
  2. Design Considerations
  3. Deployment
  4. Scaling
  5. Summary

Introduction

In our previous article, we learnt about Streaming and about Authentication and Authorization in SignalR, and how to configure them. Today, let's learn about design, deployment, and scaling considerations in SignalR.

Design Considerations

SignalR can be a game-changer, but it's best not to rely on it for your app's core functionality if you can help it. A hub connection isn't as dependable as a traditional request-response setup. That's why I suggest teaming up SignalR with a tried-and-true API, like REST or gRPC, for the heavy lifting. Let the hub handle notifications and keep the critical stuff, like data handling, with the reliable APIs.
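To make that split concrete, here's a minimal sketch using hypothetical NotificationHub and OrdersController names: the REST endpoint does the reliable work, and the hub is only used afterwards, via IHubContext, to fan out a best-effort notification.

```csharp
using Microsoft.AspNetCore.Mvc;
using Microsoft.AspNetCore.SignalR;

// Notification-only hub: clients listen here, but never push critical data through it.
public class NotificationHub : Hub { }

[ApiController]
[Route("api/orders")]
public class OrdersController : ControllerBase
{
    private readonly IHubContext<NotificationHub> _hub;

    public OrdersController(IHubContext<NotificationHub> hub) => _hub = hub;

    [HttpPost]
    public async Task<IActionResult> Create(OrderDto order)
    {
        // 1. Do the critical work over the dependable request-response path
        //    (persist the order via your data layer here).

        // 2. Then use the hub purely as a best-effort notification channel.
        await _hub.Clients.All.SendAsync("OrderCreated", order.Id);

        return Ok();
    }
}

// Hypothetical payload type, just for the sketch.
public record OrderDto(int Id, string Product);
```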

Imagine what happens if the SignalR connection suddenly drops. It's more common than you might realize. One way to handle this is by telling the client to reconnect automatically. By doing this, it'll keep trying to connect, with longer intervals between attempts. But remember, each reconnection means a new connection ID, so the client won't be part of any groups anymore.

Also, the connection object has two handy events: onreconnecting, which tells you why the disconnect happened, and onreconnected, which gives you the new connection ID (in the .NET client these surface as the Reconnecting and Reconnected events). With these events, you can keep the user informed or adjust the app's state as needed.

Code Sample - SignalR Automatic Reconnect
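The snippet below is a minimal sketch with the .NET client (the JavaScript client exposes the equivalent withAutomaticReconnect, onreconnecting, and onreconnected); the hub URL and hub name are placeholders.

```csharp
using Microsoft.AspNetCore.SignalR.Client;

// Build a connection that reconnects automatically; by default it retries
// after 0, 2, 10, and 30 seconds, then gives up.
var connection = new HubConnectionBuilder()
    .WithUrl("https://localhost:5001/notificationHub") // placeholder URL
    .WithAutomaticReconnect()
    .Build();

// Raised when the connection is lost and a reconnect attempt begins;
// the exception (if any) tells you why the disconnect happened.
connection.Reconnecting += error =>
{
    Console.WriteLine($"Connection lost: {error?.Message}. Reconnecting...");
    return Task.CompletedTask;
};

// Raised once the client is connected again, with the *new* connection ID.
// This is a good place to rejoin groups or refresh app state.
connection.Reconnected += connectionId =>
{
    Console.WriteLine($"Reconnected with connection ID: {connectionId}");
    return Task.CompletedTask;
};

await connection.StartAsync();
```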

Here's a cool feature introduced in ASP.NET Core 8 called Stateful Reconnect. When it's turned on and the connection takes a dive, any messages that were supposed to go between server and client get stored in a buffer, just hanging out in memory. And get this: there can be buffers on both sides, client and server. Once the connection gets back on its feet, those buffered messages get delivered. Better late than never, huh? Just so you know, there's a limit to how big the buffer can get on each side; it's set to 100 KB by default.

Code Sample - SignalR Stateful Reconnect
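Here's a sketch of wiring this up, assuming .NET 8 and the hypothetical NotificationHub from earlier: the server opts in per hub endpoint and can optionally change the buffer size.

```csharp
using Microsoft.AspNetCore.SignalR;

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddSignalR(options =>
{
    // Optional: raise the per-connection buffer from the 100 KB default.
    options.StatefulReconnectBufferSize = 200_000; // bytes
});

var app = builder.Build();

// Opt this hub endpoint in to stateful reconnect.
app.MapHub<NotificationHub>("/notificationHub",
    options => options.AllowStatefulReconnects = true);

app.Run();

// The same hypothetical hub used in the earlier sketch.
public class NotificationHub : Hub { }
```

The client opts in as well, and it pairs nicely with automatic reconnect:

```csharp
using Microsoft.AspNetCore.SignalR.Client;

var connection = new HubConnectionBuilder()
    .WithUrl("https://localhost:5001/notificationHub", // placeholder URL
        options => options.UseStatefulReconnect = true)
    .WithAutomaticReconnect()
    .Build();

await connection.StartAsync();
```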

Deployment

Once your app is out there, whether it's chilling in the cloud or hanging out on local servers, double-check that WebSockets are switched on. Sometimes, they're turned off by default, so keep an eye out for that.

Also, watch out for connection limits. There's usually a cap on how many WebSocket connections can happen simultaneously. And if you're using other methods besides WebSockets, there might be limits on HTTP requests too.
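If you want to be explicit about which transports are in play (and therefore which limits apply), you can restrict them per hub endpoint. A small sketch that slots into Program.cs, again using the hypothetical NotificationHub:

```csharp
using Microsoft.AspNetCore.Http.Connections;

// Only allow WebSockets and Server-Sent Events for this hub; long polling is
// excluded, so there is no per-poll HTTP request overhead to budget for.
app.MapHub<NotificationHub>("/notificationHub", options =>
{
    options.Transports = HttpTransportType.WebSockets
                       | HttpTransportType.ServerSentEvents;
});
```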

Oh, and don't forget about sticky sessions. You'll want those turned on when you're scaling out your app, meaning you've got it running on multiple servers to handle lots of requests at once. Keep those sessions sticky for a smooth ride!

Scaling

When your app is up and running in the cloud, making it handle more traffic is usually as easy as adjusting a slider to add more instances. There's this thing called a load balancer that helps out by picking a server for each new request that comes in. It might choose servers one after the other, or use some other smart way to decide.

If you're only using WebSockets (and you skip the negotiate handshake), you're all good. Once that connection is set up, it's like there's a direct tunnel between one server and the browser, so no worries there.
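One way to lock that in is to configure the client for WebSockets only and skip the negotiate step, so no two requests can ever land on different servers. A minimal sketch with the .NET client (placeholder URL):

```csharp
using Microsoft.AspNetCore.Http.Connections;
using Microsoft.AspNetCore.SignalR.Client;

var connection = new HubConnectionBuilder()
    .WithUrl("https://example.com/notificationHub", options =>
    {
        // WebSockets only, and no separate negotiate request, so the whole
        // connection lives on a single server without sticky sessions.
        options.Transports = HttpTransportType.WebSockets;
        options.SkipNegotiation = true;
    })
    .Build();

await connection.StartAsync();
```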

But if you're using other methods to send messages, things get tricky without sticky sessions. See, each message is treated like a separate request. So, every time a message is sent, it might land on a different server. And that server might not have a clue about what messages were sent earlier or what's going on.

Sticky sessions come to the rescue! There are a bunch of ways to make this work, but here's the gist: When the first request is sent, the load balancer tags the browser with a cookie, showing which server it used. Then, for future requests, the load balancer checks that cookie and sends them to the same server as before.

Note: Azure SignalR Service is a fully managed service that makes it easy to use SignalR in your apps. It handles the scaling for you, so you don't have to worry about it. You can use it with Azure Functions, Azure Logic Apps, and more.
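If you go down that route, the server-side switch is small. A sketch, assuming the Microsoft.Azure.SignalR NuGet package is installed and a connection string is configured under the default Azure:SignalR:ConnectionString key:

```csharp
using Microsoft.AspNetCore.SignalR;

var builder = WebApplication.CreateBuilder(args);

// Route SignalR traffic through Azure SignalR Service instead of keeping
// the client connections on your own servers; no sticky sessions needed.
builder.Services.AddSignalR().AddAzureSignalR();

var app = builder.Build();

app.MapHub<NotificationHub>("/notificationHub");

app.Run();

// The same hypothetical notification hub used in the earlier sketches.
public class NotificationHub : Hub { }
```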

Summary

In this article, we learnt about Design, Deployment, and Scaling Considerations in SignalR. We touched on WebSockets, sticky sessions, stateful reconnect, and more. With this knowledge, we are now ready to build real-time applications with SignalR.

  • SignalR
  • Design
  • Deployment
  • Scaling