Recently, I’ve been reviewing Go’s concurrency patterns, best practices, and tools, so I decided to write a blog post about it. In this one, we’re not going to start from scratch or explain what a goroutine or channel is — I already did a deep dive on that in another post I wrote. Instead, we’ll focus on how to use them effectively and what the Go team’s philosophy is behind concurrency.
First, we’ll talk about Go’s concurrency model, then explore the tools we have, and finally check out some snippets and examples of well-known patterns.
The first topic in almost every concurrency blog or book is the difference between parallelism and concurrency. So, I’m not going to get into that again — I’ll just mention Rob Pike’s famous quote on it.
Concurrency is about dealing with lots of things at once. Parallelism is about doing lots of things at once. - Rob Pike
Go’s concurrency is based on CSP (Communicating Sequential Processes), a formal model introduced by Tony Hoare. This is the key difference between Go and many other languages, and a big part of what makes it feel unique.
You can find the paper online, but in short, the idea is that in concurrent programs, instead of relying on memory-synchronization primitives like locks, you should prevent data races by designing the flow of data between processes so they run in the correct order. In Go terms: don’t communicate by sharing memory; share memory by communicating. That’s the reason channels exist.
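As a toy illustration of that philosophy, here is a minimal sketch (the names are my own) where a single goroutine owns a counter and other code talks to it over a channel, instead of every goroutine locking shared memory:

```go
package main

import "fmt"

// ownCounter gives one goroutine exclusive ownership of the counter.
// Increments arrive over a channel, so no lock is needed: the data
// flow itself serializes access.
func ownCounter(incs <-chan int, result chan<- int) {
	total := 0
	for n := range incs {
		total += n
	}
	result <- total
}

func main() {
	incs := make(chan int)
	result := make(chan int)
	go ownCounter(incs, result)
	for i := 0; i < 5; i++ {
		incs <- 1
	}
	close(incs) // no more increments
	fmt.Println("total:", <-result)
}
```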
Go Concurrency Tools
Let’s do a quick review of Go’s concurrency tools and syntax.
Goroutines
go func() {
    time.Sleep(1 * time.Second)
    fmt.Println("this will run later")
}()
fmt.Println("actual function")
Goroutines in Go are lightweight, independently executing functions managed by the Go runtime, allowing easy concurrency. They run in the same address space but are much cheaper than OS threads, with the Go scheduler efficiently managing thousands or even millions of them. You start a goroutine simply by prefixing a function call with the go keyword, enabling concurrent execution without complex thread management.
Channels
Channels in Go are typed conduits that let goroutines communicate and synchronize by sending and receiving values. They help safely share data without explicit locks. You can create a channel using make, send values into it with <-, and receive values from it the same way. For example:
ch := make(chan int)
go func() { ch <- 42 }() // send value to channel
val := <-ch // receive value from channel
fmt.Println(val) // Output: 42
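Channels can also be buffered, so sends only block once the buffer is full, and they can be closed, which lets a receiver range over them until the values run out. A small sketch:

```go
package main

import "fmt"

func main() {
	// A buffered channel with capacity 3: the three sends below
	// don't block because the buffer has room for them.
	ch := make(chan int, 3)
	ch <- 1
	ch <- 2
	ch <- 3
	close(ch) // closing signals "no more values will be sent"

	// range keeps receiving until the channel is closed and drained.
	for v := range ch {
		fmt.Println(v)
	}
}
```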
Select Statement
The select statement in Go lets a goroutine wait on multiple channel operations simultaneously, executing the first one that’s ready. It’s like a switch, but for channels, enabling responsive and non-blocking communication. For example:
ch1 := make(chan string)
ch2 := make(chan string)
go func() { ch1 <- "hello" }()
go func() { ch2 <- "world" }()
select {
case msg1 := <-ch1:
    fmt.Println("Received:", msg1)
case msg2 := <-ch2:
    fmt.Println("Received:", msg2)
}
Here, select waits for either ch1 or ch2 to send a value and executes the corresponding case, helping coordinate multiple concurrent operations efficiently.
Wait Group
A WaitGroup in Go is used to wait for multiple goroutines to finish their work before continuing. It’s part of the sync package and helps coordinate concurrent execution. You call Add to set the number of goroutines, Done when a goroutine completes, and Wait to block until all are done. For example:
var wg sync.WaitGroup
wg.Add(2)
go func() {
    defer wg.Done()
    fmt.Println("Task 1 done")
}()
go func() {
    defer wg.Done()
    fmt.Println("Task 2 done")
}()
wg.Wait()
fmt.Println("All tasks completed")
Mutex
A Mutex in Go, short for mutual exclusion lock, is used to protect shared data from concurrent access by multiple goroutines. It ensures that only one goroutine can access a critical section of code at a time, preventing race conditions. You lock it before accessing shared data and unlock it afterward. For example:
var mu sync.Mutex
counter := 0
for i := 0; i < 3; i++ {
    go func() {
        mu.Lock()
        counter++
        mu.Unlock()
    }()
}
time.Sleep(time.Second) // crude wait for the goroutines; a WaitGroup would be more robust
fmt.Println("Counter:", counter)
Here, the Mutex ensures that only one goroutine increments counter at a time, keeping the data consistent.
Patterns
Now that we’ve reviewed the tools, let’s move on to the patterns.
OR Channel
In some cases, we need to wait for multiple channels to respond, and as soon as one of them closes or sends a response, we want to take that as the result — basically, we want to “OR” the channels together. For this, we can use an OR-channel. Here’s a simple implementation of the OR-channel:
func or(channels ...<-chan any) <-chan any {
    switch len(channels) {
    case 0:
        return nil
    case 1:
        return channels[0]
    }
    orDone := make(chan any)
    go func() {
        defer close(orDone)
        switch len(channels) {
        case 2:
            select {
            case <-channels[0]:
            case <-channels[1]:
            }
        default:
            select {
            case <-channels[0]:
            case <-channels[1]:
            case <-channels[2]:
            case <-or(append(channels[3:], orDone)...):
            }
        }
    }()
    return orDone
}
As you can see, we use select to wait on all of the channels, and we use a recursive pattern to handle any number of inputs. In this function, if one of the channels closes, the orDone channel will be closed as well. Here’s an example of how we can use it:
sig := func(after time.Duration) <-chan any {
    c := make(chan any)
    go func() {
        defer close(c)
        time.Sleep(after)
    }()
    return c
}
start := time.Now()
<-or(sig(2*time.Second), sig(3*time.Second), sig(1*time.Second))
fmt.Println("Done after:", time.Since(start))
Tee Channel
Now, what if we want to duplicate data from one input channel into two or more channels, so multiple goroutines can consume the same stream of data independently? For this, we can use a tee channel. It’s called a “tee” because it works like the Linux tee command, which splits data to multiple destinations.
func tee(in <-chan int) (<-chan int, <-chan int) {
    out1 := make(chan int)
    out2 := make(chan int)
    go func() {
        defer close(out1)
        defer close(out2)
        for v := range in {
            // send to both channels
            v1, v2 := v, v
            out1 <- v1
            out2 <- v2
        }
    }()
    return out1, out2
}
func main() {
    in := make(chan int)
    go func() {
        defer close(in)
        for i := 1; i <= 3; i++ {
            in <- i
        }
    }()
    out1, out2 := tee(in)
    for v1 := range out1 {
        v2 := <-out2
        fmt.Println("out1:", v1, "out2:", v2)
    }
}
This pattern lets you fan out data to multiple consumers without losing values — each output channel receives the same sequence of data from the input.
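One caveat: in the version above the two sends happen in order, so a consumer that stops reading out1 also stalls out2. A common variant (a sketch of my own, using the select-with-nil-channels trick) lets the two sends for each value complete in either order:

```go
package main

import "fmt"

// teeSelect forwards each value from in to both outputs. Setting a
// channel variable to nil after its send disables that select case,
// so each output receives the value exactly once, in either order.
func teeSelect(in <-chan int) (<-chan int, <-chan int) {
	out1 := make(chan int)
	out2 := make(chan int)
	go func() {
		defer close(out1)
		defer close(out2)
		for v := range in {
			c1, c2 := out1, out2
			for i := 0; i < 2; i++ {
				select {
				case c1 <- v:
					c1 = nil // already delivered to out1
				case c2 <- v:
					c2 = nil // already delivered to out2
				}
			}
		}
	}()
	return out1, out2
}

func main() {
	in := make(chan int)
	go func() {
		defer close(in)
		for i := 1; i <= 3; i++ {
			in <- i
		}
	}()
	out1, out2 := teeSelect(in)
	for v1 := range out1 {
		fmt.Println("out1:", v1, "out2:", <-out2)
	}
}
```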
Fan Out / Fan In
These two are probably the most famous concurrency patterns — they help distribute and combine tasks and data across multiple goroutines.
Fan-Out
Fan-out means starting multiple goroutines that all read from the same channel, spreading the work across them. This is mostly used in worker patterns and task dispatching, allowing work to be done concurrently or in parallel. Here’s an example:
func worker(id int, jobs <-chan int, results chan<- int) {
    for j := range jobs {
        fmt.Printf("Worker %d processing job %d\n", id, j)
        time.Sleep(time.Second)
        results <- j * 2
    }
}
func main() {
    jobs := make(chan int, 5)
    results := make(chan int, 5)
    // Start 3 workers (Fan-Out)
    for i := 1; i <= 3; i++ {
        go worker(i, jobs, results)
    }
    for j := 1; j <= 5; j++ {
        jobs <- j
    }
    close(jobs)
    for a := 1; a <= 5; a++ {
        fmt.Println("Result:", <-results)
    }
}
Fan-In
Fan-in is the opposite — it merges multiple output channels into a single channel, allowing you to collect results from one place. It’s useful when you have multiple data sources or are running concurrent jobs and want to consolidate the results. Here’s an example:
func generator(nums ...int) <-chan int {
    out := make(chan int)
    go func() {
        for _, n := range nums {
            out <- n
        }
        close(out)
    }()
    return out
}
func fanIn(cs ...<-chan int) <-chan int {
    out := make(chan int)
    var wg sync.WaitGroup
    wg.Add(len(cs))
    for _, c := range cs {
        go func(c <-chan int) {
            for n := range c {
                out <- n
            }
            wg.Done()
        }(c)
    }
    go func() {
        wg.Wait()
        close(out)
    }()
    return out
}
func main() {
    ch1 := generator(1, 2, 3)
    ch2 := generator(4, 5, 6)
    // Merge channels (Fan-In)
    merged := fanIn(ch1, ch2)
    for n := range merged {
        fmt.Println(n)
    }
}
Worker Pool
Worker pool is a common concurrency pattern where a fixed number of workers process tasks independently, and you can collect results from them. Here, we can use both fan-out and fan-in patterns: fan-out to dispatch tasks among workers, and fan-in to gather results from them. Here’s an example:
package main
import (
    "fmt"
    "sync"
    "time"
)
// Worker function
func worker(id int, jobs <-chan int, results chan<- int) {
    for j := range jobs {
        fmt.Printf("Worker %d started job %d\n", id, j)
        time.Sleep(time.Second) // simulate work
        fmt.Printf("Worker %d finished job %d\n", id, j)
        results <- j * 2
    }
}
func main() {
    const numJobs = 5
    const numWorkers = 3
    jobs := make(chan int, numJobs)
    results := make(chan int, numJobs)
    var wg sync.WaitGroup
    // Start worker goroutines
    for w := 1; w <= numWorkers; w++ {
        wg.Add(1)
        go func(id int) {
            defer wg.Done()
            worker(id, jobs, results)
        }(w)
    }
    // Send jobs
    for j := 1; j <= numJobs; j++ {
        jobs <- j
    }
    close(jobs) // no more jobs
    // Wait for all workers to finish
    wg.Wait()
    close(results) // close results channel
    // Collect results
    for r := range results {
        fmt.Println("Result:", r)
    }
}
Pipeline
The pipeline pattern is a concurrency design where data flows through a series of stages, with each stage performing a task and passing its output to the next. The idea is to break a large, single-step operation into multiple concurrent steps. Here’s an example:
package main
import (
    "fmt"
    "time"
)
// Stage 1: Generate numbers
func generator(nums ...int) <-chan int {
    out := make(chan int)
    go func() {
        for _, n := range nums {
            out <- n
        }
        close(out)
    }()
    return out
}
// Stage 2: Multiply numbers by 2
func multiplyByTwo(in <-chan int) <-chan int {
    out := make(chan int)
    go func() {
        for n := range in {
            out <- n * 2
        }
        close(out)
    }()
    return out
}
// Stage 3: Add 1 to numbers
func addOne(in <-chan int) <-chan int {
    out := make(chan int)
    go func() {
        for n := range in {
            out <- n + 1
        }
        close(out)
    }()
    return out
}
func main() {
    start := time.Now()
    // Build pipeline: generator -> multiply -> add
    nums := generator(1, 2, 3, 4, 5)
    stage2 := multiplyByTwo(nums)
    stage3 := addOne(stage2)
    // Collect results
    for result := range stage3 {
        fmt.Println(result)
    }
    fmt.Println("Pipeline completed in:", time.Since(start))
}
Context
Now, let’s talk about the context package. Suppose I have a concurrent application with many processes and goroutines, and I want to stop the app gracefully. What should I do? Should I create a channel and have every task listen to it? Or maybe I want to stop all of them after a certain amount of time? Or perhaps I want to share data between them? This is where context comes in handy.
- Cancellation – Stop goroutines when a task is no longer needed.
- Timeout/Deadline – Automatically cancel operations after a time limit.
- Values – Pass request-scoped data (like user ID or trace ID) safely across goroutines.
Here’s an example of using context to cancel a worker:
package main
import (
    "context"
    "fmt"
    "time"
)
func worker(ctx context.Context) {
    for {
        select {
        case <-ctx.Done(): // listen for cancellation
            fmt.Println("Worker stopped:", ctx.Err())
            return
        default:
            fmt.Println("Working...")
            time.Sleep(500 * time.Millisecond)
        }
    }
}
func main() {
    // Create a context with timeout
    ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
    defer cancel() // ensure resources are released
    go worker(ctx)
    // Wait for context to expire
    <-ctx.Done()
    fmt.Println("Main done:", ctx.Err())
}
With this approach, the worker notices that the operation has been stopped and can clean up before exiting.
Error Propagation
In concurrent applications, we have multiple goroutines running, and any of them can fail. This is different from sequential code, where we can simply return the error and stop the flow. In concurrent programs there are several ways to handle errors: maybe we want to collect both successful and failed results, stop everything immediately on the first failure, or wait for all tasks to finish and then return the accumulated errors. That’s where error propagation comes in handy.
The first approach is to use channels to propagate errors. Here’s an example:
func worker(id int, jobs <-chan int, errs chan<- error) {
    for j := range jobs {
        if j%2 == 0 {
            errs <- fmt.Errorf("worker %d: cannot process %d", id, j)
            return
        }
        fmt.Println("Worker", id, "processed", j)
    }
    errs <- nil
}
func main() {
    jobs := make(chan int, 3)
    errs := make(chan error, 3)
    go worker(1, jobs, errs)
    jobs <- 2
    jobs <- 3
    close(jobs)
    if err := <-errs; err != nil {
        fmt.Println("Error occurred:", err)
    }
}
Or we can cancel the whole process using context:
ctx, cancel := context.WithCancel(context.Background())
errs := make(chan error)
go func() {
    // simulate an error
    errs <- fmt.Errorf("something went wrong")
}()
go func() {
    <-ctx.Done()
    fmt.Println("Worker stopped due to cancellation")
}()
err := <-errs
if err != nil {
    cancel() // propagate cancellation to other goroutines
    fmt.Println("Error propagated:", err)
}
time.Sleep(100 * time.Millisecond) // give the worker a moment to observe the cancellation
Be careful to close goroutines and channels when canceling jobs, as failing to do so can easily lead to goroutine leaks.
Conclusion
Go’s concurrency model—built around goroutines, channels, and context—makes it both simple and powerful to write highly concurrent and efficient programs. Patterns like fan-out/fan-in, worker, and pipeline help structure concurrent workflows cleanly, while tools like WaitGroups, Mutexes, and contexts ensure proper synchronization, cancellation, and safety. By combining these patterns thoughtfully and handling error propagation carefully, you can build scalable, reliable systems that make full use of Go’s concurrency strengths. In short, Go turns concurrency from a complex challenge into an elegant, practical tool for real-world development.