Concurrent/Parallel HTTP Requests in Go
A common source of high latency in all types of applications (scripts, HTTP servers, CLI tools) is running HTTP requests in sequence when they could be run in parallel.
We’ll be looking at how to achieve this in Go. Concurrency examples in the Go ecosystem tend to be more abstract or low-level than this post’s close-to-real-world example of integrating a third-party HTTP API.
For those more interested in a fully working example, the code is available on GitHub: github.com/HugoDF/go-parallel-http-requests.
See also the official example for sync.WaitGroup, which uses HTTP requests.
Why is it important to run requests in parallel when possible?
Let’s look at a scenario with fakestoreapi.com where we load a cart and then the relevant associated products.
The “cart” request has to happen before the “product” requests because until we have the cart, we don’t know which products are contained in it (based on cart.products[].productId).
Based on these requirements, if we make the product requests one after the other (in sequence), we get the following timeline. Note that the total time spent is the sum of all request times.
gantt
    title Fetch Cart then Products in sequence
    axisFormat %S.%Ls
    todayMarker off
    tickInterval 500millisecond
    GET /carts/1 :a1, 2000, 338ms
    GET /products/1 :after a1, 169ms
    GET /products/2 :272ms
    GET /products/3 :290ms
On the other hand, each product request doesn’t require data from the other product requests, which means we could conceivably make all the product requests in parallel. This yields the following timeline. Note that the total time spent is the “cart request time” plus the “longest product request time”.
gantt
    title Fetch Cart then Products in parallel
    axisFormat %S.%Ls
    todayMarker off
    tickInterval 500millisecond
    GET /carts/1 :a1, 2000, 338ms
    GET /products/1 :p1, after a1, 169ms
    GET /products/2 :p2, after a1, 272ms
    GET /products/3 :p3, after a1, 290ms
    Speedup (441ms) :active, after p1 p2 p3, 441ms
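Using the example timings from the diagrams: sequentially the total is 338 + 169 + 272 + 290 = 1069ms, whereas in parallel it is 338 + max(169, 272, 290) = 628ms, which is where the 441ms speedup comes from.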
Pre-requisite: fetching fakestoreapi carts and products
The full working example is available at: github.com/HugoDF/go-parallel-http-requests
We need functions to load data from https://fakestoreapi.com/carts/{cartId} and https://fakestoreapi.com/products/{productId}.
In order to help with JSON parsing, we start with type Type struct declarations that allow the encoding/json decoder to map JSON fields to Go struct fields using “struct tags” (eg. `json:"id"`; for more examples of tags, see Go Wiki: Well-known struct tags).
package fakestoreapi
import (
"encoding/json"
"fmt"
"net/http"
)
type CartResponseProductItem struct {
ProductId int `json:"productId"`
Quantity int `json:"quantity"`
}
type CartResponse struct {
Id int `json:"id"`
UserId int `json:"userId"`
Date string `json:"date"`
Products []CartResponseProductItem `json:"products"`
}
type ProductRating struct {
Rate float32 `json:"rate"`
Count int `json:"count"`
}
type ProductResponse struct {
Id int `json:"id"`
Price float32 `json:"price"`
Title string `json:"title"`
Description string `json:"description"`
Category string `json:"category"`
Image string `json:"image"`
Rating ProductRating `json:"rating"`
}
The LoadCart and LoadProduct functions are going to look very similar. They do the following in order:
- convert the int id to a full URL with a template and fmt.Sprintf
- http.Get the URL, check for a request error, assign the response, and ensure the response body gets closed (even if there’s a panic) with defer resp.Body.Close()
- allocate a parsedResp variable of the correct type (CartResponse or ProductResponse)
- decode resp.Body into parsedResp using json.NewDecoder().Decode() and check for decoding errors
- return parsedResp
That yields the following LoadCart:
func LoadCart(cartId int) CartResponse {
url := fmt.Sprintf("https://fakestoreapi.com/carts/%d", cartId)
resp, err := http.Get(url)
if err != nil {
panic(err) // @todo log.Fatalf("Error loading Cart URL "%v", error: %v", url, err)
}
defer resp.Body.Close()
var parsedResp CartResponse
decodeErr := json.NewDecoder(resp.Body).Decode(&parsedResp)
if decodeErr != nil {
panic(err) // @todo log.Fatalf("Error decoding LoadCart response %v", err)
}
return parsedResp
}
And the following LoadProduct:
func LoadProduct(id int) ProductResponse {
url := fmt.Sprintf("https://fakestoreapi.com/products/%d", id)
resp, err := http.Get(url)
if err != nil {
panic(err) // @todo log.Fatalf("Error loading Product URL "%w", error: %w", url, err)
}
defer resp.Body.Close()
var parsedResp ProductResponse
decodeErr := json.NewDecoder(resp.Body).Decode(&parsedResp)
if decodeErr != nil {
panic(err) // @todo log.Fatalf("Error decoding LoadProduct response %w", err)
}
return parsedResp
}
Running HTTP requests in sequence in Go
Reminder that the full example is available at github.com/HugoDF/go-parallel-http-requests/blob/main/fakestoreapi/fakestoreapi.go.
We can use the LoadCart and LoadProduct functions to orchestrate the actual fetching. First we load the cart, then use a for ... range loop over cartResponse.Products and call LoadProduct with each item’s ProductId.
We also record “start” and “end” times and log the duration along with the number of fetched products.
We put this code in fakestoreapi/fakestoreapi.go.
package fakestoreapi
import (
"encoding/json"
"fmt"
"log/slog"
"net/http"
"time"
)
// no change to type struct definitions
// no change to LoadCart or LoadProduct
// LoadCartAndProductsSequential is a naive implementation: products are loaded one after the other in a blocking for loop.
func LoadCartAndProductsSequential(cartId int) (CartResponse, []ProductResponse) {
start := time.Now()
cartResponse := LoadCart(cartId)
productResponses := make([]ProductResponse, 0, len(cartResponse.Products))
for _, product := range cartResponse.Products {
productRes := LoadProduct(product.ProductId)
productResponses = append(productResponses, productRes)
}
end := time.Now()
duration := end.Sub(start)
slog.Info("LoadCartAndProductsSequential runtime",
"duration", duration,
"cartId", cartResponse.Id,
"len(products)", len(productResponses),
)
return cartResponse, productResponses
}
We also create a go.mod file with the following:
module go-parallel-http-requests
go 1.21.5
Finally we create a main.go file which imports fakestoreapi via our module name (per the go.mod, so “go-parallel-http-requests”) and calls fakestoreapi.LoadCartAndProductsSequential(1).
package main
import (
"go-parallel-http-requests/fakestoreapi"
)
func main() {
fakestoreapi.LoadCartAndProductsSequential(1)
}
When we run go run main.go, we see the following output:
2024/01/04 07:27:19 INFO LoadCartAndProductsSequential runtime duration=1.472817292s cartId=1 len(products)=3
We’ve now shown how to load carts and products from fakestoreapi, but the product requests are done in sequence (one after the other), which leaves some time on the table. Next we’ll see how to parallelise the product requests.
Parallelising requests with Goroutines and channels
Pre-requisites for this section:
- a configured go.mod file with module go-parallel-http-requests
- the LoadCart and LoadProduct functions from “Pre-requisite: fetching fakestoreapi carts and products” above
If you hit any issues, the full example is available at github.com/HugoDF/go-parallel-http-requests/blob/main/fakestoreapi/fakestoreapi.go.
We’ll now see how to use Goroutines and channels to parallelise the fetching of products.
We will create a function that operates as follows:
- (no change compared to the sequential approach) Load the cart
- initialise a channel (productResponsesCh)
- loop through Cart.Products, launching an anonymous function as a Goroutine for each product
  - inside this go func(product) function we call LoadProduct and send the product response on productResponsesCh using the <- send statement
- we then need to “wait” or “block” for items on productResponsesCh; we do this with a for i... loop which receives from productResponsesCh as many times as there are products in the cart
  - each value received from the channel is appended to a productResponses slice
- finally we log the runtime and some response information using slog.Info and return cartResponse, productResponses
// no change to package definition
// no change to imports
// no change to type struct definitions
// no change to LoadCart or LoadProduct
// LoadCartAndProductsExhaustChannel loads products in parallel; since we know the number of calls, we receive from the channel that number of times.
func LoadCartAndProductsExhaustChannel(cartId int) (CartResponse, []ProductResponse) {
start := time.Now()
cartResponse := LoadCart(cartId)
productResponsesCh := make(chan ProductResponse, len(cartResponse.Products))
for _, product := range cartResponse.Products {
go func(product CartResponseProductItem) {
productRes := LoadProduct(product.ProductId)
productResponsesCh <- productRes
// productResponsesCh "receives" productRes
}(product)
}
productResponses := make([]ProductResponse, 0, len(cartResponse.Products))
for i := 0; i < len(cartResponse.Products); i++ {
comm := <-productResponsesCh
productResponses = append(productResponses, comm)
}
end := time.Now()
duration := end.Sub(start)
slog.Info("LoadCartAndProductsExhaustChannel runtime",
"duration", duration,
"cartId", cartResponse.Id,
"len(products)", len(productResponses),
)
return cartResponse, productResponses
}
We modify main.go to call fakestoreapi.LoadCartAndProductsExhaustChannel(1) in addition to LoadCartAndProductsSequential.
package main
import (
"go-parallel-http-requests/fakestoreapi"
)
func main() {
fakestoreapi.LoadCartAndProductsSequential(1)
fakestoreapi.LoadCartAndProductsExhaustChannel(1)
}
When we run go run main.go, we get output similar to the following (timing values vary depending on fakestoreapi response times). Given similar response times for each API call, LoadCartAndProductsExhaustChannel total time should be less than LoadCartAndProductsSequential total time.
2024/01/04 07:28:21 INFO LoadCartAndProductsSequential runtime duration=1.446134083s cartId=1 len(products)=3
2024/01/04 07:28:21 INFO LoadCartAndProductsExhaustChannel runtime duration=781.319125ms cartId=1 len(products)=3
Notes and limitations
A few things to note.
First, if we remove the for i... loop, or the comm := <-productResponsesCh receive inside it, the function would exit without waiting for the Goroutines to complete; more on that later.
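To make that concrete, here is a hypothetical sketch (not part of the example repo, assumed to live in the same fakestoreapi package) of what dropping the receive loop would look like:

// Hypothetical variant (not in the example repo): with no receive loop,
// nothing blocks after the Goroutines are launched, so the function returns
// before any product request has completed.
func LoadCartAndProductsNoWait(cartId int) (CartResponse, []ProductResponse) {
	cartResponse := LoadCart(cartId)
	productResponsesCh := make(chan ProductResponse, len(cartResponse.Products))
	for _, product := range cartResponse.Products {
		go func(product CartResponseProductItem) {
			productResponsesCh <- LoadProduct(product.ProductId)
		}(product)
	}
	// No receive from productResponsesCh: we return immediately.
	return cartResponse, nil
}

Nothing waits for the Goroutines here, so the caller gets back an empty product list and the in-flight requests are simply abandoned when the program exits.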
Second, we can only use the for i... loop because we know how many API calls we’re making (one API call per item in Cart.Products). If we were making a number of API calls based on user-provided information (eg. provided by a CLI or an HTTP endpoint), we would not be able to use this approach.
Third, note that we haven’t changed anything about LoadProduct to enable channels or Goroutines; thanks to first-class support for functions in Go, we used an anonymous function as our Goroutine. This lack of a construct like “async/await” contrasts with languages such as JavaScript, C#, Kotlin or Swift, where we might have had to modify more of the HTTP fetching code in order to parallelise it.
Finally, the “receive from the channel (<-) a set number of times” approach can wait forever (deadlock) if any of the Goroutines fail to send. In our case we call panic() in LoadProduct if the API call fails, so the program would crash instead. But if errors were handled inside LoadProduct by being returned (and the Goroutine skipped the send on failure), any API call failures would cause LoadCartAndProductsExhaustChannel to wait forever, since only N - Count(API call failures) values would ever be receivable on the channel (where N is the number of products).
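To make this failure mode concrete, here is a hypothetical sketch (not part of the example repo, assumed to live in the same fakestoreapi package) where the loader returns an error instead of panicking and the Goroutine skips the send on failure:

// Hypothetical (not in the example repo): a LoadProduct variant that returns
// an error instead of panicking.
func LoadProductWithError(id int) (ProductResponse, error) {
	url := fmt.Sprintf("https://fakestoreapi.com/products/%d", id)
	resp, err := http.Get(url)
	if err != nil {
		return ProductResponse{}, err
	}
	defer resp.Body.Close()
	var parsedResp ProductResponse
	if err := json.NewDecoder(resp.Body).Decode(&parsedResp); err != nil {
		return ProductResponse{}, err
	}
	return parsedResp, nil
}

// Hypothetical (not in the example repo): a Goroutine that returns early on
// error never sends on the channel, so the fixed-count receive loop below
// blocks forever waiting for a value that never arrives.
func LoadCartAndProductsLossy(cartId int) (CartResponse, []ProductResponse) {
	cartResponse := LoadCart(cartId)
	productResponsesCh := make(chan ProductResponse, len(cartResponse.Products))
	for _, product := range cartResponse.Products {
		go func(product CartResponseProductItem) {
			productRes, err := LoadProductWithError(product.ProductId)
			if err != nil {
				return // nothing is sent for this product
			}
			productResponsesCh <- productRes
		}(product)
	}
	productResponses := make([]ProductResponse, 0, len(cartResponse.Products))
	for i := 0; i < len(cartResponse.Products); i++ {
		// Blocks forever if any request failed: only
		// len(Products) - failures values are ever sent on the channel.
		productResponses = append(productResponses, <-productResponsesCh)
	}
	return cartResponse, productResponses
}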
We’ve now seen how to use Goroutines, channels and a for i... loop to load products in parallel. Next we’ll see how we can use sync.WaitGroup from the Go standard library to achieve the same outcome with more flexibility and robustness.
Increasing flexibility and robustness with sync.WaitGroup
The more canonical and idiomatic way to wait for concurrent operations to complete in Go is to use a sync.WaitGroup instance; see the sync.WaitGroup docs, which include an example.
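Before adapting our fetching code, here is a minimal sketch of the canonical wg.Add(1) / defer wg.Done() / wg.Wait() pattern (a generic example, not specific to fakestoreapi):

// A minimal sketch of the canonical Add/Done/Wait pattern.
package main

import (
	"fmt"
	"sync"
)

func main() {
	var wg sync.WaitGroup
	for i := 0; i < 3; i++ {
		wg.Add(1) // register one more operation to wait on
		go func(i int) {
			defer wg.Done() // mark this operation complete, even on panic
			fmt.Println("worker", i)
		}(i)
	}
	wg.Wait() // blocks until Done has been called once per Add(1)
}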
Adopting a WaitGroup, our approach becomes:
- (no change compared to the sequential approach) Load the cart
- initialise a sync.WaitGroup instance (as a wg variable)
- (no change compared to the “exhaust channel” approach) initialise a channel (productResponsesCh)
- loop through Cart.Products, launching an anonymous function as a Goroutine for each product
  - in the loop, outside of go func(product), we call wg.Add(1); this tells the WaitGroup that there is 1 more operation that needs to be “waited on”
  - inside the go func(product) function we:
    - set defer wg.Done(); the defer means wg.Done() will run regardless of panics, and wg.Done() notifies the WaitGroup that one of the operations has completed (conceptually the opposite of wg.Add(1))
    - (no change compared to the “exhaust channel” approach) call LoadProduct and send the product response on productResponsesCh using the <- send statement
- we then wait on the WaitGroup using wg.Wait()
  - execution won’t proceed beyond this line until wg.Done() has been called the same number of times as wg.Add(1)
- we call close(productResponsesCh); this allows us to use the for ... range productResponsesCh {} (range over channel) syntax
- range over the productResponsesCh values, adding them to a productResponses slice
- finally we log the runtime and some response information using slog.Info and return cartResponse, productResponses
// no change to package definition
import (
"encoding/json"
"fmt"
"log/slog"
"net/http"
"sync"
"time"
)
// no change to type struct definitions
// no change to LoadCart or LoadProduct
// LoadCartAndProductsWaitGroup loads products in parallel, synchronising with a WaitGroup. This ensures we still collect the other results even if one of the calls fails to send on the channel.
func LoadCartAndProductsWaitGroup(cartId int) (CartResponse, []ProductResponse) {
start := time.Now()
cartResponse := LoadCart(cartId)
var wg sync.WaitGroup
productResponsesCh := make(chan ProductResponse, len(cartResponse.Products))
for _, product := range cartResponse.Products {
wg.Add(1)
go func(product CartResponseProductItem) {
defer wg.Done()
productRes := LoadProduct(product.ProductId)
productResponsesCh <- productRes
}(product)
}
wg.Wait()
close(productResponsesCh)
productResponses := make([]ProductResponse, 0, len(cartResponse.Products))
for chValue := range productResponsesCh {
productResponses = append(productResponses, chValue)
}
end := time.Now()
duration := end.Sub(start)
slog.Info("LoadCartAndProductsWaitGroup runtime",
"duration", duration,
"cartId", cartResponse.Id,
"len(products)", len(productResponses),
)
return cartResponse, productResponses
}
We can modify main.go to call fakestoreapi.LoadCartAndProductsWaitGroup(1).
package main
import (
"go-parallel-http-requests/fakestoreapi"
)
func main() {
fakestoreapi.LoadCartAndProductsSequential(1)
fakestoreapi.LoadCartAndProductsExhaustChannel(1)
fakestoreapi.LoadCartAndProductsWaitGroup(1)
}
When we run go run main.go, we get output similar to the following (timing values vary depending on fakestoreapi response times). Given similar response times for each API call, LoadCartAndProductsExhaustChannel and LoadCartAndProductsWaitGroup total times should be similar (and both should be less than LoadCartAndProductsSequential total time).
2024/01/04 07:28:45 INFO LoadCartAndProductsSequential runtime duration=1.18411375s cartId=1 len(products)=3
2024/01/04 07:28:45 INFO LoadCartAndProductsExhaustChannel runtime duration=726.673958ms cartId=1 len(products)=3
2024/01/04 07:28:46 INFO LoadCartAndProductsWaitGroup runtime duration=669.639458ms cartId=1 len(products)=3
One of the key benefits of the WaitGroup is in situations where we don’t know ahead of time how many requests need to be made: as long as we call wg.Add(1) before launching each Goroutine and defer wg.Done() inside it, we can add as many parallel Goroutines as needed and WaitGroup.Wait() will block until they have all completed.
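For example, a hypothetical standalone main.go (not part of the example repo) could take product IDs from the command line, so the number of requests is only known at runtime:

// Hypothetical main.go (not in the example repo): product IDs are supplied
// on the command line, so the number of requests is only known at runtime.
package main

import (
	"fmt"
	"os"
	"strconv"
	"sync"

	"go-parallel-http-requests/fakestoreapi"
)

func main() {
	var wg sync.WaitGroup
	// Buffered to the maximum possible number of responses.
	productResponsesCh := make(chan fakestoreapi.ProductResponse, len(os.Args)-1)
	for _, arg := range os.Args[1:] {
		id, err := strconv.Atoi(arg)
		if err != nil {
			continue // skip arguments that aren't numeric IDs
		}
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			productResponsesCh <- fakestoreapi.LoadProduct(id)
		}(id)
	}
	wg.Wait()
	close(productResponsesCh)
	for product := range productResponsesCh {
		fmt.Println(product.Title)
	}
}

Running something like go run main.go 1 2 3 4 5 would then fetch all five products in parallel, without the orchestration code needing to know the count in advance.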
As mentioned in the previous section, defer wg.Done() makes the code robust to failures and panics, and the “synchronisation/wait” code is decoupled from the “data management” code.
Final reminder: the full example is available at github.com/HugoDF/go-parallel-http-requests/blob/main/fakestoreapi/fakestoreapi.go.